Patent Abstract:
System and method for product identification: a system and method for identifying an object includes a plurality of object sensors, each object sensor being configured and arranged to determine at least one parameter that describes objects that are moved relative to a sensing volume, and each having a known position and attitude relative to the sensing volume. A location sensor is configured and arranged to produce position information regarding the relative motion. The outputs of the object sensors and the location sensor are passed to a processor, and the parameters are associated with respective ones of the objects based on the position information and based on the known positions and attitudes of the sensors. For each object that has associated parameters, the processor compares the parameters to known item parameters to assign an item ID to the object.
Publication number: BR112012022984A2
Application number: R112012022984-2
Filing date: 2011-03-14
Publication date: 2020-08-25
Inventors: Brett Bracewell Bonner; Cameron Dryden; Andris J. Jankevics; Hsin-Yu Sidney Li; Torsten Platz; Michael David Roberts; Pirooz Vatan; Justin E. Kolterman
Applicant: Sunrise R&D Holdings, Llc
IPC main classification:
Patent description:

"SYSTEM AND METHOD FOR PRODUCT IDENTIFICATION" This application claims priority to U.S. Provisional Application No. 61/430,804, filed on January 7, 2011, and U.S. Provisional Application No. 61/313,256, filed on March 12, 2010, each of which is incorporated by reference in its entirety herein.
TECHNICAL FIELD The description herein relates generally to methods and systems for identifying items, and more particularly to identifying items that pass through a detection volume.
BACKGROUND In a variety of environments, it can be useful to identify objects and to read encoded information related to those objects. For example, point-of-sale (POS) systems use barcode scanners to identify products to be purchased. Likewise, shipping, logistics, and mail sorting operations make use of automated identification systems. Depending on the context, encoded information may include prices, destinations, or other information related to the object on which the code is placed. In general, it is useful to reduce errors and exceptions that require human intervention in the operation.
SUMMARY Implementations of various approaches to item identification and code reading are described in this document.
One aspect of an embodiment includes a method that includes determining at least one parameter that describes objects as they are moved relative to a detection volume, using a sensor that has a known position and attitude with respect to the detection volume, generating location information with respect to the relative motion, passing the parameters and position information to a processor, associating the parameters with respective ones of the objects based on the position information and based on the known position and attitude of the sensors, and, for each object that has associated parameters, comparing the parameters with known item parameters to assign an item ID to the object. One aspect of an embodiment includes a system which includes a plurality of sensors, each sensor configured and arranged to determine at least one parameter describing objects as they are moved relative to a sensing volume, each sensor having a known position and attitude with respect to the sensing volume; a location sensor, configured and arranged to produce location information with respect to the relative motion; and a processor, configured to receive the parameters, to associate the parameters with respective ones of the objects based on the position information and based on the known position and attitude of the sensors, and to compare the parameters with known item parameters to assign an item identification to the object.
One aspect of an embodiment of the invention includes a system for asynchronously identifying an item within a sensing volume that includes a plurality of object sensors, each object sensor configured and arranged to determine at least one parameter describing objects as they are moved relative to the sensing volume, and each having a known position and attitude relative to the sensing volume.
The system includes a position sensor, configured and arranged to produce position information with respect to the relative motion, where the position information does not comprise system clock information, and a processor, configured and arranged to receive parameters from the object sensors, to associate the parameters with respective ones of the objects based on the position information and based on the known position and attitude of the object sensor that determined each respective parameter, without considering system clock information, and, for each object that has at least one associated parameter, to compare the at least one associated parameter with known item parameters to assign an item ID to the object.
One aspect of an embodiment of the invention includes a method of asynchronously identifying an item within a sensing volume that includes determining at least one parameter that describes objects as they are moved relative to the sensing volume, using a plurality of object sensors, each of which has a known position and attitude with respect to the sensing volume.
The method includes producing position information with respect to the relative motion, where the position information does not comprise system clock information, associating the parameters with respective ones of the objects based on the position information and based on the known position and attitude of the object sensor that determined each respective parameter, without regard to system clock information, and, for each object that has at least one associated parameter, comparing the at least one associated parameter with known item parameters to assign an item ID to the object.
One aspect of one embodiment includes a tangible machine-readable medium encoded with machine-executable instructions for performing a method as described herein or for controlling an apparatus or system as described herein.
The summary section above is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description section. The summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in any part of this disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS These and other features will be better understood in relation to the following description, appended claims, and accompanying drawings in which: Figure 1 schematically illustrates one embodiment of an item identification system; Figure 2A is an oblique view of one embodiment of an item identification system; Figure 2B is an oblique view of the system of Figure 2A; Figure 3A is an oblique right side view of one embodiment of an item identification system; Figure 3B is a top plan view of one embodiment of an item identification system; Figure 3C is a right elevation view of one embodiment of an item identification system; Figure 4A is a left elevation view of one embodiment of an item identification system; Figure 4B is an oblique left side view of one embodiment of an item identification system; Figure 5A is an oblique sectional left side view of one embodiment of an item identification system; Figure 5B is a sectional left elevation view of one embodiment of an item identification system; Figure 6A is a sectional left elevation view of one embodiment of an item identification system;
Figure 6B is an oblique sectional top view of one embodiment of an item identification system; Figure 7A is an oblique sectional left side view of one embodiment of an item identification system; Figure 7B is a sectional left elevation view of one embodiment of an item identification system; Figures 8 to 12 are data flow diagrams illustrating data flow through one embodiment of an item identification system and its subsystems; Figure 13 is a timing diagram illustrating the output of certain sensors in one embodiment of an item identification system; Figure 14 is a data flow diagram illustrating the flow of data through one embodiment of a subsystem of an item identification system; and Figure 15 is a data flow diagram illustrating the flow of data through one embodiment of a subsystem of an item identification system.
DETAILED DESCRIPTION Figure 1 schematically illustrates an item identification system 25. One or more items 20 to be identified are placed on a conveyor system to be carried through a detection volume 240. In the notional embodiment shown here, the conveyor system is a conveyor belt 31. As a matter of practicality, the conveyor system may consist of more than one conveyor belt to allow additional control over the flow of items through the detection volume. In one embodiment, as illustrated in Figure 3A, three belts are used: an infeed conveyor belt, onto which the items to be identified are loaded; a sensing volume conveyor belt, which moves items through the sensing volume 240; and an outfeed conveyor belt, which takes the items away from the detection volume 240 for further processing. In, for example, a retail environment, "additional processing" may include bagging, reverse logistics processing, and other processing known to those skilled in the art. In some embodiments, the conveyor system only includes the sensing volume conveyor belt. Other belts, such as the infeed conveyor belt or the outfeed conveyor belt, can be added depending on the specific application contemplated.
As illustrated in the schematic diagram of Figure 1, the transport system can be treated as if it were an infinite transport path. As will be described in detail below, in one embodiment, the item identification system may be designed such that the processing algorithms treat each belt segment as if it were a unique location, and any item associated with that segment is treated consistently as if it were at that location. In this regard, the item identification system 25 may lack information regarding how or when items are placed on the belt and may have no information regarding what happens after they leave the detection volume 240. In one embodiment, system 25 may assign linearly increasing location values to each segment of the essentially endless conveyor belt 31 as it enters the sensing volume 240, much like a street address, and the system can act as if the belt has an unlimited length. An item associated with a particular street address can be assumed to remain there.
Alternatively, instead of moving objects through a fixed detection volume, the detection volume could be scanned along fixed item locations. That is, instead of a conveyor belt 31 moving objects, the sensing volume could be driven down the street, looking at the items distributed along the ever-increasing street addresses. For example, this could be applied in a warehouse environment where a detection device is driven along aisles and detects items grouped on shelves.
Conveyor belt 31 is equipped with a physical conveyor location sensor 122. The physical conveyor location sensor 122 measures the position of conveyor belt 31 relative to a fixed reference location in the sensing volume of system 25. In some embodiments, the physical conveyor location sensor 122 is an encoder associated with a roller of the sensing volume conveyor belt. The physical conveyor location sensor 122 produces a pulse every time the essentially endless conveyor belt 31 moves a fixed incremental distance with respect to the detection volume 240. By way of example, a rotary encoder may include graduations corresponding to 1 mil incremental movements of conveyor belt 31. In principle, each graduation produces a single count in an ever-increasing accumulation, but in one embodiment, several counts can be aggregated for each system count. As an example, each system count can correspond to five nominal detector counts. Additionally, it may be helpful to be able to account for slippage or other events that could cause the belt to reverse motion. In this regard, such an approach would employ a quadrature encoder in which a pair of encoder outputs are out of phase with each other by 90º. In this approach, a direction can be assigned to the belt movement based on a determination as to which of the two outputs occurred first. Sensing volume 240 is the volume of space through which the conveyor system transports items 20, and is delineated by the combined fields of view/sensing regions of several item parameter sensors 220, which include, but are not limited to, the item isolator 140.
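By way of a non-limiting illustration of the quadrature approach described above, the following sketch (in Python; the function names, the single-transition sampling assumption, and the five-counts-per-system-count figure are illustrative only) shows how a direction and a signed count could be derived from two encoder outputs that are 90º out of phase.

```python
# Illustrative sketch: decoding a quadrature encoder into a signed belt count.
# Channels (a, b) are assumed to be sampled fast enough that at most one channel
# changes between samples; production firmware would use a full 4x decode table.

def quadrature_step(prev_a, prev_b, a, b):
    """Return +1, -1, or 0 belt increments for one sample of channels A/B."""
    if (prev_a, prev_b) == (a, b):
        return 0                                  # no transition
    # Gray-code sequence 00 -> 01 -> 11 -> 10 corresponds to forward motion.
    forward = [(0, 0), (0, 1), (1, 1), (1, 0)]
    i_prev = forward.index((prev_a, prev_b))
    i_now = forward.index((a, b))
    return +1 if (i_prev + 1) % 4 == i_now else -1

def accumulate(samples):
    """Accumulate encoder transitions into a signed raw count."""
    raw, (pa, pb) = 0, samples[0]
    for a, b in samples[1:]:
        raw += quadrature_step(pa, pb, a, b)
        pa, pb = a, b
    return raw

# Five forward transitions and one reverse transition net to four raw counts;
# at five raw counts per system count, the system count has not yet advanced.
raw = accumulate([(0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (1, 0), (0, 0)])
print(raw, raw // 5)
```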
The detection volume 240 includes several parameter sensors 220 to detect items 20 moving through it. Some embodiments have at least two different parameter sensors 220: an item isolator and a marking area reading system which includes one or more marking area sensors.
In some embodiments, additional parameter sensors, such as a dimension sensor and/or a weight sensor, may be included.
Parameter sensors can be understood as the physical sensors, which convert some observable parameter into electrical signals, or as the physical sensor in combination with an associated parameter processing function, which transforms raw data (initial detection data) into digital values used in further processing.
The parameter processors can be co-located and/or embedded with the physical sensors, or they can be software modules running in parallel with other modules on one or more general purpose computers. In one embodiment, the output values measured by the parameter sensors 220 are transferred to other software modules in the processors.
This transfer may be, in one embodiment, asynchronous.
Data from the parameter sensors 220 is associated with location information provided by the transport system location sensor and sent to two processing modules: the item description compiler 200, which performs the process of matching all parameter values collected for a particular item to create an item description, and the item identification processor 300, which queries a product description database to try to find a match between the item description and a product, and issues a product ID or an exception flag.
Optionally, system 25 can include an exception handler (shown in Figure 15).
One embodiment of an item identification system 25 is illustrated in Figure 2A. As shown, a detection volume is inside an upper housing 28. A lower housing 26 acts as a structural base to support the sensing volume conveyor belt (as shown in Figure 3A), the physical transport location sensor 122, and many of the mechanical and optical components of the system 25, which include, without limitation, an upper observation line-scan camera 88. As will be appreciated, a line-scan camera has a substantially flat field of view; although it is not strictly planar in the mathematical sense, it is essentially a thin rectangle that has a low divergence.
In embodiments, detection volume 240 may be partially enclosed so that the surrounding walls form a tunnel structure. As illustrated in Figure 2A, a tunnel structure is formed by the upper housing 28, which provides convenient locations to which elements of the various sensors can be attached, as well as reducing the possibility of unwanted intrusion into the detection volume 240 by hands and miscellaneous objects. In the embodiment shown in Figure 2A, the upper housing 28 is used as a structural base to support the laser stripe generator 119, the area camera 152, the first area camera mirror 48, the second area camera mirror 49, lighting sources 40, load cells 175, a light curtain generator 12, and various other mechanical and optical components.
The area camera 152 is intended to observe the path of a laser light line, a laser stripe, projected downwards towards the transport system and any items therein within its field of view. There is a known angle between the laser stripe generator 119 and the area camera 152 which causes the image of the laser stripe in the field of view of the area camera 152 to be shifted perpendicular to the laser stripe in proportion to the height of the item on which the laser stripe is projected.
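As a hedged illustration of this triangulation geometry (the function name, the pixel scale, and the 30-degree angle are assumptions chosen for the example, not values from the disclosure), the perpendicular displacement of the imaged laser stripe can be converted to an item height roughly as follows.

```python
import math

def height_from_stripe_shift(pixel_shift, mm_per_pixel, camera_laser_angle_deg):
    """Estimate item height from the sideways shift of the imaged laser stripe.

    pixel_shift: displacement of the stripe image, in pixels, relative to where
                 the stripe falls on the empty belt.
    mm_per_pixel: scale of the area camera at the belt plane.
    camera_laser_angle_deg: known angle between the laser stripe generator and
                            the area camera line of sight.
    """
    shift_mm = pixel_shift * mm_per_pixel
    return shift_mm / math.tan(math.radians(camera_laser_angle_deg))

# Example: a 40-pixel shift at 0.5 mm/pixel with a 30-degree angle -> ~34.6 mm tall.
print(round(height_from_stripe_shift(40, 0.5, 30), 1))
```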
As illustrated in Figure 2B, a first load cell 175A, a second load cell (not visible from this perspective), a third load cell 175C, and a fourth load cell (not visible from this perspective) are positioned to measure a load on the belt. Six line-scan cameras, including, but not limited to, a lower right outfeed end line-scan camera 80 and an upper observation line-scan camera 88, are shown mounted in the lower housing 26 in Figure 2B. In one embodiment, system 25 includes eleven line-scan cameras arranged in various positions and various attitudes to completely cover the detection volume within the upper housing. In one embodiment, each camera has a position and attitude that are sufficiently well known that the location of a detected item can be determined to less than about 4 in. (i.e., less than about 1 degree of arc). In this regard, the cameras can be precisely mounted inside a structural module so that mounting the structural module to a frame member of the system provides accurate information regarding the direction in which the camera is pointed. In one embodiment, some or all of the cameras may include a polarization filter to reduce specular reflection from packaging materials, which may tend to obscure bar codes. In this configuration, it may be useful to increase the light output of the light sources in order to compensate for light loss due to the polarization filters.
The line-scan cameras are structured and arranged so that they have a field of view that includes line-scan camera mirrors. A first lower right outfeed end line-scan mirror 92 is shown in Figure 2B as an example of a line-scan mirror. The first lower right outfeed end line-scan mirror 92 reflects light from other line-scan mirrors (shown in Figure 3A) into the lower right outfeed end line-scan camera 80, so that the lower right outfeed end line-scan camera 80 produces line-scan data on an item when it arrives within its field of view on the detection volume conveyor belt 32 (not visible in Figure 2B; see Figure 3A). Also shown in Figure 2B is a lower right observation light source 128.
In one embodiment, the conveyor belt may be about 50.8 centimeters (20 inches) wide and travel at a speed of about twenty-four meters (eighty feet) per minute, or about forty centimeters (sixteen inches) per second. As will be appreciated, the travel speed can be selected in accordance with the additional processing operations to be performed on items after identification. For example, a grocery store application may require a relatively slow belt speed to allow an employee to perform bagging tasks, while a package picking application can allow for a higher belt speed as the separated packages can be mechanically handled.
Figure 3A illustrates right-hand camera optics usable for imaging a first item 20A and a second item 20B. The first item 20A is shown having a front side 21, a top side 22 and a left side.
23. While not shown in Figure 3A, the first item 20A also has a bottom side, a back side and a right side. While illustrated as a box of groceries in the Figures, the first item 20A could take the form of any item suitable for passage through the detection volume in accordance with a selected application.
In the illustrated embodiment, the first item 20A and the second item 20B are transported into the detection volume by an infeed conveyor belt 30 in the direction of movement toward the outfeed end of the infeed conveyor belt 30 and toward the infeed end of the sensing volume conveyor belt 32. The first item 20A and the second item 20B are conveyed through the sensing volume by the sensing volume conveyor belt 32 in the direction of movement toward the outfeed end of the sensing volume conveyor belt 32 and toward the infeed end of the outfeed conveyor belt 34. Upon entering the sensing volume, objects to be identified pass through a light curtain 10 generated by light curtain generator 12, as best seen in Figure 4B.
In the illustrated embodiment, the light curtain 10 is projected downwards towards a slot 36 between the sensing volume conveyor belt 32 and the input feed conveyor belt 30 and is reflected by a mirror 14 to a detector 16. The light curtain generator can be, for example, a bar that includes a linear array of LEDs arranged to provide a substantially flat sheet of light.
The light curtain detector 16 may include a linear array of photodetectors that detect the light curtain projected by the LEDs.
In order to improve spatial resolution and reduce false photodetector readings, the LEDs and detectors are activated sequentially in pairs.
This approach tends to reduce the effects of potential stray light from an LED entering the detectors despite the presence of an object in the field of view. When an object passes through the curtain, it casts a shadow over the photodetectors, providing information about the object's width as it passes through the light curtain. A series of measurements of this type can be used as a set of parameters to identify the object. In one embodiment, the spatial resolution of the light curtain generator/detector assembly will be on the order of a few millimeters, although in principle, finer or coarser measurements may be useful, depending on the application. For the grocery application, finer resolution may be required in order to distinguish similar product packages.
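To make the width measurement concrete, a minimal sketch follows (Python; the element pitch, the scan format, and all names are hypothetical) of how a single light-curtain scan, read as a list of shadowed/illuminated photodetectors, could be turned into an object width.

```python
def object_width_mm(shadowed, element_pitch_mm=2.0):
    """Estimate object width from one light-curtain scan.

    shadowed: list of booleans, one per LED/photodetector pair, True where the
              beam is blocked by an object.
    element_pitch_mm: spacing between adjacent LED/photodetector pairs.
    """
    blocked = [i for i, s in enumerate(shadowed) if s]
    if not blocked:
        return 0.0                     # nothing in the curtain on this scan
    return (blocked[-1] - blocked[0] + 1) * element_pitch_mm

# Example scan: elements 3 through 7 are shadowed -> about 10 mm wide at 2 mm pitch.
scan = [False, False, False, True, True, True, True, True, False, False]
print(object_width_mm(scan))
```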
As seen in Figure 3A, the lighting sources 40 illuminate the detection volume conveyor belt 32. The field of view of the lower right outfeed end line-scan camera 80 is focused on the first lower right outfeed end line-scan mirror 92. The first lower right outfeed end line-scan mirror 92 reflects light from a second lower right outfeed end line-scan mirror 93, which reflects light from a third lower right outfeed end line-scan mirror 94. The third lower right outfeed end line-scan mirror 94 reflects light from the detection volume conveyor belt 32. Therefore, the lower right outfeed end line-scan camera 80 focuses its field of view on the sensing volume conveyor belt 32, where it captures line-scan data about the first item 20A and the second item 20B as they are conveyed in the direction of movement along the sensing volume conveyor belt 32. The upper right infeed end line-scan camera 83 is also shown, which likewise images the detection volume conveyor belt 32. The lower right outfeed end line-scan camera 80 is operatively connected to an image processor and collects the line-scan data. The image processor determines a parameter value of the first item 20A and a parameter value of the second item 20B as each is carried through the detection volume.
In one embodiment, the image processor is a marking area reader. After the marking area reader collects the line-scan data corresponding to the first item 20A, it attempts to identify the marking area 24A of the first item 20A on the front side 21 of the first item 20A. In the illustrated case, there is no identification code on the front side of the item, so in operation the marking area reader will fail to identify the marking area 24A of the first item 20A based on the front side image. However, the marking area reader, when it receives line-scan data from the lower right outfeed end line-scan camera 80 or the upper right outfeed end line-scan camera 81, can successfully capture and identify the marking area 24B of the second item 20B.
A lower right infeed end line-scan camera 82 has a field of view focused on a first lower right infeed end line-scan mirror 95. The first lower right infeed end line-scan mirror 95 reflects light from a second lower right infeed end line-scan mirror 96, which reflects light from a third lower right infeed end line-scan mirror 97. The third lower right infeed end line-scan mirror 97 reflects light from the sensing volume conveyor belt 32. Therefore, the lower right infeed end line-scan camera 82 focuses its field of view on the sensing volume conveyor belt 32, where it captures line-scan data about the first item 20A and the second item 20B as they are conveyed in the direction of motion along the sensing volume conveyor belt 32. After the marking area reader collects the line-scan data corresponding to the first item 20A, it identifies a marking area 24A on the left side 23 of the first item 20A. In one embodiment, the line-scan cameras can be triggered by signals derived from the physical transport location sensor to capture one line of scan data for every five thousandths of an inch of travel of the conveyor belt 32. That is, when using an encoder that has a one-thousandth-of-an-inch interval, every five intervals constitute one system count, and one line-scan image will be captured per system count.
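A hedged sketch of this pulse-counting trigger logic (the class and method names are hypothetical, and the one-trigger-per-five-pulses figure simply reuses the example above):

```python
class LineScanTrigger:
    """Emit one line-scan trigger for every N encoder pulses (e.g., N = 5 one-mil
    pulses, i.e., one trigger per five thousandths of an inch of belt travel)."""

    def __init__(self, pulses_per_trigger=5):
        self.pulses_per_trigger = pulses_per_trigger
        self.pulse_count = 0

    def on_encoder_pulse(self):
        """Called once per physical encoder pulse; returns True when a line
        should be captured."""
        self.pulse_count += 1
        if self.pulse_count >= self.pulses_per_trigger:
            self.pulse_count = 0
            return True
        return False

trigger = LineScanTrigger()
captures = sum(trigger.on_encoder_pulse() for _ in range(23))
print(captures)   # 23 pulses -> 4 line captures
```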
Turning to Figure 3B, the right-side camera optics are illustrated and include, but are not limited to, the lower right infeed end line-scan camera 82 and the lower right outfeed end line-scan camera 80. The right-side camera optics capture light from the illumination source 40 that is reflected back into the field of view of the right-side camera optics by one or more line-scan mirrors. The line-scan mirrors shown in Figure 3B include the second lower right outfeed end line-scan mirror 93, the third lower right outfeed end line-scan mirror 94, the second lower right infeed end line-scan mirror 96, and the third lower right infeed end line-scan mirror 97, although more or fewer mirrors may be included depending on the specific application contemplated. Figure 3B also shows the upper right outfeed end line-scan camera 81 and the upper right infeed end line-scan camera 83, which generate images of the sensing volume conveyor belt 32; when the infeed conveyor belt 30 delivers the first and second items 20A and 20B to the sensing volume conveyor belt 32, these line-scan cameras will image the items as well. Eventually, the first and second items 20A and 20B will be out of sight of the upper right outfeed end line-scan camera 81 and the upper right infeed end line-scan camera 83 as they are passed along to the outfeed conveyor belt 34. In one embodiment, the line-scan cameras may be mounted horizontally so that they are less likely to accumulate dust on the camera lenses. Folding mirrors can be used to provide selected field-of-view geometries to allow these horizontally mounted cameras to watch the detection volume from different angles.
To achieve a desired depth of focus for each line-scan camera, along with a fine image resolution for reading marking areas, the optical path for each line-scan camera should be several feet from each item 20 in the detection volume. To allow for long optical paths without unduly expanding the size of the system 25, each line-scan camera optical path can be folded, for example, by line-scan mirrors 93, 94, 96, and 97.
Because the field of view width for each line-scan camera expands linearly as the optical distance from the line-scan camera increases, the line-scan mirrors that are optically closest to the first item 20A and second item 20B can be wider than the belt width in the line-scan direction.
As will be appreciated, for an imaging field at a 45 degree angle to the belt, the field width is √2 times the belt width, and the mirror must be large enough to subtend that field.
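Expressed as a brief worked relation (generic symbols; the numeric example simply reuses the 50.8 cm belt width mentioned above and is not a dimension specified for any particular mirror):

```latex
w_{\text{field}} = \frac{w_{\text{belt}}}{\cos 45^{\circ}} = \sqrt{2}\, w_{\text{belt}}
\qquad \text{e.g., } w_{\text{belt}} = 50.8\ \text{cm (20 in)} \;\Rightarrow\; w_{\text{field}} \approx 71.8\ \text{cm (28.3 in)}.
```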
However, because each line-scan camera only images a narrow line of the detection volume, about five thousandths of an inch wide in certain embodiments, each line-scan mirror can be very short in the perpendicular direction.
In some embodiments, each line-scan mirror is only a fraction of an inch high.
Line scan mirrors are made of glass, about a quarter of an inch thick and about an inch tall.
In a device that has a detection volume of , the line-scan mirrors can be from about twenty centimeters (eight inches) wide to about seventy-six centimeters (thirty inches) wide, depending on what portion of the detection volume that mirror is responsible for.
The line-scan mirrors allow the side, top, and back views of the line-scan cameras' fields of view to be folded while keeping the side and top walls relatively narrow, about seventeen centimeters (seven inches) thick in one embodiment. Each line-scan camera produces line-scan data of light reflected from items 20 moving through the detection volume. In one embodiment, with the rated speed of all conveyor belts and the imaging resolution, the line-scan cameras operate at approximately 3,200 lines per second, corresponding to exposure times of approximately 300 microseconds per line scan; these short exposure times require reasonably bright lighting to render high contrast images. For reasonable energy and lighting efficiencies, a light source 40 can be selected to provide intense illumination with low divergence, focused along each line-scan camera's optical line of sight.
Figure 3C illustrates the right-side camera optics. The right-side camera optics include, but are not limited to, the lower right outfeed end line-scan camera 80, the upper right outfeed end line-scan camera 81, the lower right infeed end line-scan camera 82, and the upper right infeed end line-scan camera 83, which are each connected to the lower housing 26 of the system 25. The right-side camera optics are shown focused using line-scan mirrors.
In this embodiment, the first lower right outfeed end line-scan mirror 92 reflects light from the second lower right outfeed end line-scan mirror 93, which reflects light from the third lower right outfeed end line-scan mirror 94, which reflects light from the sensing volume conveyor belt 32. In addition, the first lower right infeed end line-scan mirror 95 reflects light from the second lower right infeed end line-scan mirror 96, which reflects light from the third lower right infeed end line-scan mirror 97, which reflects light from the sensing volume conveyor belt 32. Light falls on the sensing volume conveyor belt 32 from the light source 40 mounted in the upper housing 28. When the first item 20A and the second item 20B exit the outfeed end of the infeed conveyor belt 30, they enter the infeed end of the sensing volume conveyor belt 32 and pass through the fields of view of the right-side camera optics, and line-scan data corresponding to the first item 20A and the second item 20B is generated.
The first item 20A, which carries the marking area 24A, and the second item 20B, which carries the marking area 24B, exit the detection volume as they are transported by the detection volume conveyor belt 32 onto the infeed end of the outfeed conveyor belt 34. Multiple line-scan cameras, each with its own perspective, capture multiple images of the first item 20A and the second item 20B before they exit the detection volume. The generated line-scan data is used by the system 25 to recognize parameters for each item, as discussed further below.
An upper observation line-scan camera 88 is mounted in the lower housing 26, as shown in Figure 4A. In this figure, item 20 travels from left to right along the infeed conveyor belt 30 through the detection volume 240. A belt slot 36 is provided between the infeed conveyor belt 30 and the sensing volume conveyor belt 32. The upper observation line-scan camera light source 41 provides intense illumination of the belt slot 36 with low divergence, allowing the upper observation line-scan camera 88 to produce a high contrast image.
The upper observation line-scan camera 88 produces images from light traveling through the belt slot 36 and over the upper observation line-scan mirror 98. The light is generated by the upper observation line-scan camera light source 41 and is reflected off the item 20 as it travels from the infeed conveyor belt 30 over the belt slot 36 and onto the sensing volume conveyor belt 32.
In addition to providing an image of item 20 for further analysis by the marking area reader, the upper observation line-scan camera 88 provides unobstructed images of the bottom of item 20. Although analysis by the marking area reader may identify a marking area on the bottom of item 20, the sizing sensor uses the clear images of the bottom of item 20 to help refine the measurements of item 20. Items of different heights (such as the first item 20A and the second item 20B shown in Figures 3A and 3C) can be placed adjacent to one another on the infeed conveyor belt 30 without the item isolator treating the items of distinct heights as a single item that has a more complex geometry. As shown in Figure 4B, the optical components of the upper observation line-scan camera, including the upper observation line-scan camera light source 41, the upper observation line-scan mirror 98, and the upper observation line-scan camera 88, are located within the lower housing 26 of system 25. In the illustrated embodiment, the optical path of the upper observation line-scan camera 88 is folded only once, by the upper observation line-scan mirror 98. In other words, light reflected off item 20 as the item crosses the belt slot 36 is reflected off the upper observation line-scan mirror 98 to the upper observation line-scan camera 88. As previously described, item 20 is positioned over the belt slot 36 when item 20 is transferred from the infeed conveyor belt 30 to the sensing volume conveyor belt 32.
As will be seen, the upper observation camera is a darkfield detector. That is, in the absence of an object in its measurement area, it will receive little or no reflected light, and the image will be dark. When an object is present in the measurement area, light from the illumination source 41 will be reflected back to the camera. In contrast, the light curtain described above is a brightfield detector. When no object is present, the image is bright, while when an object is present, the image field is shadowed by the object, causing it to appear as a dark object to the detector.
Working in conjunction with each other, the two systems allow detection and measurement of objects that can be difficult to detect with either approach alone. For example, an object that is relatively dark and/or a poor reflector may be difficult for the upper observation camera to distinguish from the dark background field. Similarly, an object that is relatively transparent may not produce enough contrast to be detected by the light curtain. The inventors have determined that a good object singulation rate can be obtained when using the two sensors in combination with the laser stripe generator 119 described below.
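A rough sketch of how the complementary brightfield and darkfield signals could be fused at a given measurement position (the thresholds, the simple OR rule, and all names are assumptions introduced only for illustration):

```python
def object_present(light_curtain_level, darkfield_level,
                   curtain_shadow_threshold=0.6, darkfield_reflect_threshold=0.2):
    """Fuse the brightfield light curtain and the darkfield upper observation camera.

    light_curtain_level: normalized brightness at the curtain detector (1.0 = unblocked).
    darkfield_level: normalized brightness at the darkfield camera (0.0 = empty belt slot).
    """
    shadowed = light_curtain_level < curtain_shadow_threshold     # object blocks the curtain
    reflecting = darkfield_level > darkfield_reflect_threshold    # object reflects light back
    return shadowed or reflecting

# A transparent bottle may barely shadow the curtain but still reflect into the camera:
print(object_present(light_curtain_level=0.9, darkfield_level=0.5))   # True
# A dark, matte item reflects little but casts a clear shadow:
print(object_present(light_curtain_level=0.1, darkfield_level=0.05))  # True
```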
As seen in Figure 5A, a transport location sensor includes, but is not limited to, an in-feed conveyor belt 30, a sensing volume conveyor belt 32, an out-feed conveyor belt 34 , and a physical transport location sensor 122. A weight sensor, also seen in Figure 5A, includes, but is not limited to, at least one load cell (175A-D in Figure 12), previously mentioned in the context of Figure 2B.
In one embodiment, the weight sensor includes four load cells.
The set of four load cells supports the sensing volume conveyor belt 32 and its associated mechanical structure (motor, rollers, the belt, etc.). In some embodiments, the weight sensor also includes three object sensors, shown herein as an infeed conveyor belt object sensor 173A, a sensing volume input object sensor 173B, and a sensing volume output object sensor 173C.
In some embodiments, each object sensor is placed about two-tenths of an inch above the transport location sensor 122. In some embodiments, the object sensors are light source and photodetector pairs in which the optical path between the light source and the photodetector is interrupted in the presence of an object, such as item 20. Other object sensors are well known in the art and may be used depending on the specific application contemplated.
Item 20 is transported toward the detection volume along the infeed conveyor belt 30. In one embodiment, as the item 20 approaches the sensing volume, the infeed conveyor belt object sensor 173A detects that the item 20 is about to enter the sensing volume. Item 20 passes over the belt slot 36 as it is transferred from the infeed conveyor belt 30 to the sensing volume conveyor belt 32, and the sensing volume input object sensor 173B verifies that the item 20 is in the detection volume. Similarly, the sensing volume output object sensor 173C detects when the item 20 leaves the sensing volume and is transferred from the sensing volume conveyor belt 32 to the outfeed conveyor belt 34. However, the existence and particular location of each object sensor varies depending on the specific application contemplated.
When, as in Figure 5A, no item is located on the sensing volume conveyor belt 32, the load cells measure the total weight of the sensing volume conveyor belt 32. Then, as one or more items 20 are transferred to the sensing volume conveyor belt 32, the load cells measure the weight of the sensing volume conveyor belt 32 and the weight of the one or more items 20. Each load cell converts the force (weight) into a measurable electrical signal, which is read as a load cell voltage. Since the electrical signal output from each load cell is on the order of millivolts, the load cell signals are amplified and digitized by load cell amplifiers (not shown).
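In keeping with the description above, a simplified sketch of converting the four load-cell signals into a net item weight follows (the scale factors, the tare value, and the names are assumptions; a real moving scale would also filter out belt vibration):

```python
def net_item_weight_grams(cell_voltages_mv, grams_per_mv, tare_grams):
    """Convert amplified and digitized load-cell voltages into a net item weight.

    cell_voltages_mv: readings from the four load cells supporting the belt.
    grams_per_mv: per-cell calibration factors from a known reference weight.
    tare_grams: weight of the empty sensing volume conveyor belt and its structure,
                measured when no item is present.
    """
    gross = sum(v * k for v, k in zip(cell_voltages_mv, grams_per_mv))
    return gross - tare_grams

# Example: four cells sharing one calibration factor; the tare was recorded earlier.
readings = [12.1, 11.8, 12.4, 12.0]          # millivolts, amplified
calibration = [410.0] * 4                    # grams per millivolt (hypothetical)
print(round(net_item_weight_grams(readings, calibration, tare_grams=19000.0), 1))
```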
As seen in Figure 5B, the weight sensor includes, but is not limited to, the object sensor array (173A, 173B, and 173C) and the load cells. The sensing volume input object sensor 173B is located just inside the sensing volume upper housing 28 and above the belt slot (indicated in Figure 4A by reference numeral 36) between the infeed conveyor belt 30 and the sensing volume conveyor belt 32. Similarly, the sensing volume output object sensor 173C is located just inside the upper housing 28 of the sensing volume and above the outfeed conveyor belt 34. The infeed conveyor belt object sensor 173A is located above the infeed conveyor belt 30, upstream of the detection volume. Although Figure 5B depicts the infeed conveyor belt object sensor 173A as being close to the sensing volume, the distance between the infeed conveyor belt object sensor 173A and the sensing volume may vary depending on the specific application contemplated.
Figure 5B also shows that load cells 175A and 175C are located within the lower housing 26 of the detection volume. Load cells 175B and 175D (as depicted in Figure 12) are not visible in this view, as they are blocked by load cells 175A and 175C, respectively.
The load cells support the sensing volume conveyor belt 32 and the associated mechanical parts thereof, enabling the load cell assembly to measure the weight of the sensing volume conveyor belt 32 and of any items thereon.
As seen in Figure 5B, physical transport location sensor 122, in the illustrated embodiment, a rotary encoder, is located proximate a load cell 175C.
The physical transport location sensor 122 is connected to the sensing volume conveyor belt 32 and to a digital counter on one of the system's processors.
As the sensing volume conveyor belt 32 is rotated by the motor, the encoder wheel turns, allowing the conveyor sensor processor to record the movement of the sensing volume conveyor belt 32. The cumulative movement of the conveyor belt from an arbitrary starting location is defined as the conveyor system location.
The conveyor sensor processor generates the conveyor system location of the conveyor belt for each conveyor sensor pulse generated by the physical conveyor location sensor 122, although, as mentioned above, in practice multiple sensor pulses may together constitute one system count in order to provide appropriate intervals.
The signals from the physical transport location sensor 122 are also used to trigger the line-scan cameras described herein to generate images.
In one embodiment, the conveyor system location is the coordinate along the path of the item, wherein the along-the-path coordinate system is established according to the virtual sensing volume conveyor belt, which is infinitely long. When system 25 receives the object position of item 20 from the infeed conveyor belt object sensor 173A, it generates the conveyor system location corresponding to the along-the-belt coordinate of item 20. As illustrated in Figures 6A and 6B, an embodiment of the sizing sensor includes, but is not limited to, a laser stripe generator 119, at least one laser mirror (shown herein as a first laser mirror 99, a second laser mirror 100, and a third laser mirror 101), an area camera 152, one or more area camera mirrors (shown herein as a first area camera mirror 48 and a second area camera mirror 49), an overhead line-scan camera (shown with reference numeral 88 in Figures 4A and 4B), and at least one parameter processor (not shown) for processing the parameter values generated from the area camera images from the area camera 152 and the line-scan data from the upper observation line-scan camera. The laser stripe generator 119 projects a laser stripe upwards onto the first laser mirror 99. As will be seen, various types of optical elements can convert a laser beam into a stripe; for example, a cylindrical lens, a prism, conical mirrors, or other elements can be used.
Item 20 is conveyed through the system from left to right along the conveyor system in the direction of movement, from the infeed conveyor belt 30 to the sensing volume conveyor belt 32 to the outfeed conveyor belt 34. It is transferred from the infeed conveyor belt 30 to the sensing volume conveyor belt 32, which transports it through the sensing volume. The area camera 152 has a pyramid-shaped field of view that is turned downward onto the sensing volume conveyor belt 32 after being folded by the first area camera mirror 48 and the second area camera mirror 49. Although the field of view of the area camera 152 is represented in Figures 6A and 6B as being folded by the first and second area camera mirrors 48 and 49, the number of mirrors used to fold the field of view of the area camera 152 is merely exemplary, and may vary depending on the specific application contemplated. A laser stripe is projected onto the sensing volume conveyor belt 32 within the field of view of the area camera 152. Item 20 is conveyed through the sensing volume on the sensing volume conveyor belt 32, passing through the point where the laser stripe is projected onto the sensing volume conveyor belt 32 from above.
At that point, the area camera captures area camera images of the item 20 and the laser stripe reflecting off the item.
In the embodiment illustrated in Figure 7A, the system 25 includes a lower left observation line-scan camera 89 and a lower right observation line-scan camera 90. The field of view of the lower left observation line-scan camera 89 is folded by the left side lower observation line-scan camera mirrors (a first left side lower observation line-scan camera mirror 105, a second left side lower observation line-scan camera mirror 106, a third left side lower observation line-scan camera mirror 107, and a fourth left side lower observation line-scan camera mirror 108) before being projected down onto the sensing volume conveyor belt 32 at an angle that captures the top side of item 20 and the back side of item 20 as item 20 passes, front side first, through the detection volume from the infeed conveyor belt 30 to the sensing volume conveyor belt 32 to the outfeed conveyor belt 34, as shown in the illustrated embodiment.
The field of view of the lower right observation line-scan camera 90 is folded by the right side lower observation line-scan camera mirrors (a first right side lower observation line-scan camera mirror 123, a second right side lower observation line-scan camera mirror 124, a third right side lower observation line-scan camera mirror 125, and a fourth right side lower observation line-scan camera mirror 126) before being projected down onto the sensing volume conveyor belt 32 at an angle that captures images of the top side of item 20 and the front side of item 20 as item 20 passes, front side first, through the detection volume.
The lower right observation light source 128 provides intense illumination of the detection volume conveyor belt 32 with low divergence, allowing the lower right observation line scan camera 90 to generate a high contrast image.
Similarly, the lower left observation light source (not shown in Figure 7A) provides intense illumination of the detection volume conveyor belt 32 with low divergence, allowing the lower observation line scan camera on the left side 89 generates a high contrast image.
As shown in Figure 7B, the field of view of the lower left observation line-scan camera 89 is folded first by the first left side lower observation line-scan camera mirror 105, then by the second left side lower observation line-scan camera mirror 106. It is then further folded by the third left side lower observation line-scan camera mirror 107 and the fourth left side lower observation line-scan camera mirror 108. The fourth left side lower observation line-scan camera mirror 108 projects the field of view of the lower left observation line-scan camera 89 down onto the sensing volume conveyor belt 32. Item 20 is conveyed along the infeed conveyor belt 30 toward the sensing volume conveyor belt 32, which will transport item 20 through the sensing volume after it has completed its journey over the infeed conveyor belt 30. As item 20 is transported through the sensing volume, it is brought into the field of view of the lower left observation line-scan camera 89, and the lower left observation line-scan camera 89 captures images in the form of line-scan data from item 20. Similarly, the field of view of the lower right observation line-scan camera is folded first by the first right side lower observation line-scan camera mirror, then by the second right side lower observation line-scan camera mirror.
It is then further folded by the third right side lower observation line-scan camera mirror 125 and the fourth right side lower observation line-scan camera mirror 126. The fourth right side lower observation line-scan camera mirror 126 projects the field of view of the lower right observation line-scan camera down onto the detection volume conveyor belt 32. As item 20 is conveyed through the detection volume, it is brought into the field of view of the lower right observation line-scan camera, and the lower right observation line-scan camera captures images, in the form of line-scan data, of the item.
Once item 20 has completed its journey over the sensing volume conveyor belt, it passes onto the outfeed conveyor belt 34. In some embodiments, some parameter sensors may continue to detect item 20 as it travels on the outfeed conveyor belt 34.
Information/Data Flow
Figure 8 illustrates a data flow for use in one embodiment of a system 25, organized as it moves from the top horizontal slices to the bottom horizontal slices of an asynchronous, data-driven architecture of the system.
That is, in this embodiment, there may be no universal clock within the system; sensors and processors output their results as soon as data is available, and data flows are, in general, unidirectional.
In one embodiment, information is transported between processes via TCP/IP network messages, and within processes via shared memory.
As will be discussed in more detail below, Figure 9 illustrates the same elements grouped as parallel sensing sensors/processes, namely a transport location sensor 120, one or more marking area reader(s) 130, a sizing sensor 150, an item isolator 140, and a weight sensor 170, to emphasize that each physical sensor and its associated parameter processor can operate autonomously from the other physical sensors and parameter processors.
Figure 8, on the other hand, is organized in such a way that data flows from the data source level to the parameter processor level, to the geoparameter matching level, and to the final stage, product identification, which is the stage where items that were detected in the detection volume are identified as products or labeled as exceptions.
Each level in the hierarchy of one embodiment will, in turn, be addressed below.
Data Sources The first data source is a transport system location sensor 120, which typically comprises a physical transport system location sensor 122 and a transport sensor processor 127, as shown in Figure 9. In one embodiment, the physical conveyor system location sensor 122 is a rotary encoder attached to a belt roller. As shown in Figure 9, the initial detection data from the physical transport system location sensor 122 is a count increment, the transport sensor pulse D147 (each of which may represent more than one sensor pulse), which is sent to the transport sensor processor 127. The transport sensor processor 127 performs a simple sum-and-scale process to convert the D147 transport sensor pulses into D148 transport system location values. The transport system location values are distributed to each of the other parameter processors such that the parameter processors can associate a transport system location with each measured parameter value. In some embodiments, the transport sensor processor 127 also uses the transport sensor pulses D147 to generate D142 line-scan camera trigger signals and D151 area camera trigger signals for the various line-scan cameras 132 and the area camera 152, respectively. By triggering the cameras based on transport system motion rather than at fixed time intervals, the system can avoid repeatedly recording images of the same field.
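As a hedged sketch of the sum-and-scale process and of tagging each measurement with a location value (the identifiers, the pulse pitch, and the dictionary layout are illustrative assumptions, not the disclosed implementation):

```python
class TransportSensorProcessor:
    """Convert transport sensor pulses (D147) into transport system location
    values (D148) and hand a location to each parameter measurement."""

    def __init__(self, mm_per_pulse=0.127):
        self.mm_per_pulse = mm_per_pulse          # e.g., one-mil encoder pulses
        self.pulse_total = 0                      # ever-increasing, like a street address

    def on_pulse(self, n=1):
        self.pulse_total += n

    def location_mm(self):
        """Current conveyor system location along the virtual, endless belt."""
        return self.pulse_total * self.mm_per_pulse

    def tag(self, parameter_name, value):
        """Associate a measured parameter value with the location at which it was taken."""
        return {"parameter": parameter_name, "value": value,
                "location_mm": self.location_mm()}

tsp = TransportSensorProcessor()
tsp.on_pulse(4000)                                # belt has advanced 4000 pulses
print(tsp.tag("weight_grams", 803.0))             # parameter tagged with ~508 mm
```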
The second data source illustrated in Figure 8 is the area camera 152. The area camera 152 is positioned to observe the path of a laser light line projected down onto the detection volume conveyor belt and any items thereon. As previously described, there is a known angle between the laser projector and the area camera that will cause the image of the laser light line on the camera to be shifted perpendicular to the line in proportion to the height of the item onto which the line is projected. Data from the area camera 152 is sent to the item isolation parameter processor 144 and the sizing estimator 154.
The third data source illustrated in the system of Figure 8 is an array of line-scan cameras 132. The primary function of the line-scan cameras 132 is to provide input to the marking area parameter processor(s) 134. In one embodiment, there are eleven line-scan cameras 132, which have been determined by the inventors to provide full coverage of the detection volume with adequate imaging resolution. Other embodiments may be deployed with fewer or greater numbers of line-scan cameras, depending on the designer's performance goals, detection volume size and shape, camera resolution, and other factors.
The fourth illustrated data source is a moving scale 172 comprising, in one embodiment, three object sensors 173A, 173B, and 173C (shown at least in Figure 5B) and four analog load cells 175A, 175B, 175C, and 175D (shown at least in Figure 12). Load cells are arranged in the load path to support the sensing volume conveyor belt. Each load cell generates an electrical signal in proportion to the compressive force applied to the load cell. Signals from all load cells and all object sensors are sent to weight generator 174.
The data sources described above are included in a particular embodiment and should not be construed as exhaustive. Other data sources can easily be included in such a system, depending on the parameters to be monitored. For example, infrared sensors can provide measurements of item temperature, or color imagers can be used as data sources to measure a spatial distribution of colors on packaging labels.
Parameter Processors Returning to Figure 8, the second stage of the dataflow architecture contains the parameter processors. Each data source has one or more associated parameter processor(s) to transform the initial detection data into a parameter value, which is then used by an item identification processor to identify the item. In one embodiment, these parameter processors comprise an item isolation parameter processor 144, a sizing estimator 154, a marking area parameter processor 134, and a weight generator 174. In Figure 8, an optional image processor 183 is represented as a parameter processor.
The first processor shown in Figure 8 is the item isolation parameter processor 144.
Functionally, the item isolation parameter processor 144 includes an item distinguishing system, an item locator, and an item indexer. The item isolation parameter processor 144 allows the system to operate on multiple items in close proximity to each other in the detection volume. The item isolation parameter processor 144, in some embodiments, uses the data collected near the entrance to the detection volume and performs four functions: A. first, the item isolation parameter processor 144 recognizes that an object (which can be one or more items) has entered the detection volume; B. second, the item distinguishing system determines how many distinct items make up the object that entered the detection volume; C. third, the item indexer assigns a unique item index (UII) value to each distinct item (the UII is simply a convenient name for the particular item); and D. fourth, the item locator associates a two-dimensional location in the base plane of the detection volume (e.g., the conveyor belt plane) with each item that has been identified and assigned a UII.
If all items entering the detection volume are well separated in the direction along the transport (i.e., they are singulated), there may be no need for the item isolation parameter processor 144, since all parameter values will be associated with the only item in the detection volume. When items are not singulated, however, the item isolation parameter processor 144 determines how many items are in close proximity to each other and assigns each item a UII associated with its transport system location.
The item isolation parameter processor 144 issues a UII and a D148 transport system location when it isolates an item. The unique item index (UII) value, as its name suggests, can simply be a sequentially generated index number useful for tracking the item. This data is provided to the sizing estimator 154 and to the item description compiler 200.
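A minimal sketch of the indexing and location-association role just described (the data layout, units, and names are assumptions introduced for illustration):

```python
import itertools

class ItemIsolator:
    """Assign a unique item index (UII) and a belt location to each isolated item."""

    def __init__(self):
        self._next_uii = itertools.count(1)       # sequentially generated index numbers
        self.items = {}

    def isolate(self, location_mm, footprint_xy_mm):
        """Register one distinct item found at a conveyor system location.

        footprint_xy_mm: two-dimensional location of the item in the belt plane.
        """
        uii = next(self._next_uii)
        self.items[uii] = {"location_mm": location_mm,
                           "footprint_xy_mm": footprint_xy_mm}
        return uii, location_mm                   # passed on to the sizing estimator
                                                  # and the item description compiler

isolator = ItemIsolator()
print(isolator.isolate(508.0, (120.0, 85.0)))     # -> (1, 508.0)
print(isolator.isolate(655.0, (40.0, 210.0)))     # -> (2, 655.0)
```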
Although item isolation may be a separate logical function in the system, the computer processing of the item isolation parameter processor 144 may, in particular embodiments, work in close conjunction with the sizing estimator 154, wherein internal data is transferred back and forth between the functions. The item isolation parameter processor 144 in this approach works as part of the sizing estimator 154 processing to recognize the difference between a single larger item and an aggregation of multiple small items very close together, and to instruct the sizing estimator 154 to estimate the dimensions of one or of more than one item, respectively.
The sizing estimator 154 receives data from the area camera 152, from a selected line-scan camera 132 (the upper observation camera in one embodiment), and from the transport sensor processor of the transport system location sensor 120. Also, working in conjunction with the item isolation parameter processor 144, the sizing estimator 154 receives information about how many items are in the field of view of the area camera and where they are. It should be understood that while isolation and sizing may be logically distinct functions, they can share multiple processing operations and intermediate results and need not be entirely separate computer processes.
In one embodiment, the sizing estimator 154 estimates the length, height, and width of the item's dimensions, ignoring the fact that the item may have a complex (non-rectangular) shape. That is, in this approach, the estimator 154 calculates the smallest rectangular box in which the item would fit. The sizing estimator 154 can be configured to estimate parameter values with respect to the item's general shape (cylindrical, rectangular solid, bottle-necked shape, etc.), the item's orientation on the transport system, and details with respect to the three-dimensional coordinates of the item in the detection volume. The calculated parameter values, along with the transport system location of the item to which they apply, are sent to the item description compiler 200 as soon as they are calculated.
There is a marking area parameter processor 134 associated with each line scan camera 132. Together they form a marking area reader 130, as shown in more detail in Figure 10. As will be seen, the marking area parameter processors can be individual devices or they can be virtual processors, e.g., respective modules that run on a common processor. The marking area parameter processor 134 examines the continuous strip image produced by the line scan camera 132 until it identifies the signature of a marking area (typically a bar code such as a UPC). In addition, the marking area parameter processor 134 attempts to convert the image of the marking area into the underlying code, which can later be compared by the item description processor with the product description database to determine a code that uniquely identifies the product. In addition to outputting the product code to the item description compiler 200, the marking area parameter processor 134 outputs the apparent location of the marking area in camera-centric coordinates. As will be seen, additional methods are available to determine a marking area parameter. For example, many bar codes include human-readable numerals in addition to the encoded bars that make up the code. In this regard, optical character recognition (OCR) or a similar approach can be used to recognize the numbers themselves, rather than decoding the bars. In the case where the marking areas are not bar codes but rather written identification information, again OCR can be employed to capture the code. In principle, OCR or other word recognition processes can be used to read titles or product names directly as well.
Where, as with bar codes, there are a limited number of possible characters and a limited number of fonts expected to be found, simplifying assumptions can be made to assist the OCR processes and allow for a character matching process. A library can be built, incorporating each of the potential characters or symbols, and instead of analyzing the detailed form of the character read, the shape can be compared with the library's members to determine the best match.
Also, because in a typical environment there are fewer likely matches than there are possible matches, a partially readable code can be checked against the likely codes to narrow down the options or even uniquely identify the code. By way of example, for a retail stock of tens of thousands of items, each with a ten-digit UPC, there are 10^10 possible combinations, but only tens of thousands of combinations that actually correspond to products in the retail system. In this case, for any given partially read code, there may be only one or a few candidates that match real products. By comparing the partial code with a library of codes actually in use, the system can eliminate the need to throw an exception, or it can offer an operator a small number of choices to be evaluated, which can be ranked in order of probability based on other parameters or other available information. Alternatively, partial match information can be passed as a parameter to the product identification module and evaluated along with other information to determine the correct match. In one embodiment, more than one bar code reader software module may be employed, using different processing algorithms to process the same scanned data, and the results from each module may be compared or otherwise integrated to arrive at an agreed reading, or at a more likely reading where there is no agreement.
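By way of illustration only, a minimal sketch of this narrowing step could look like the following Python fragment; the catalog contents, the use of '?' to mark unread digits, and the helper names are illustrative assumptions rather than part of the described system:

def candidate_matches(partial_code, catalog):
    """Return catalog codes consistent with a partially read code.

    partial_code: string of digits with '?' marking unreadable positions.
    catalog: iterable of code strings actually in use (e.g., stocked UPCs).
    """
    def consistent(code):
        return len(code) == len(partial_code) and all(
            p == '?' or p == c for p, c in zip(partial_code, code))
    return [code for code in catalog if consistent(code)]

# Example: only one stocked code is consistent with the partial read,
# so no exception needs to be raised.
catalog = {"036000291452", "012000161155", "049000042566"}
print(candidate_matches("0?60002914?2", catalog))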
For weight parameters, the moving scale 172 generates a signal proportional to the sum of the weights of the items on the scale. For singulated items, where only one item is in the active sensing volume at a time, the weight generator 174 can sum the signals from the moving scale 172 (the load cells in the illustrated embodiment) and apply a transformation to convert voltage to weight. For non-singulated items, where more than one item can be in the sensing volume simultaneously (i.e., closely spaced along the sensing volume conveyor belt), the weight generator 174 has two opportunities to estimate the weight of individual items: immediately after the item enters the detection volume, and immediately after the item leaves the detection volume. Object sensors are provided to inform the weight generator 174 when items have moved onto or off of the moving scale 172. The object sensors are incorporated into the moving scale 172, so its operation can be probed independently of the other parameter sensors.
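As a hedged illustration of this entry/exit differencing, a minimal Python sketch might look as follows; the calibration constant, class name, and method names are assumptions made for the example only:

class WeightGenerator:
    """Estimates per-item weights from a moving scale's summed load-cell signal."""

    def __init__(self, volts_to_weight=1.0):
        self.volts_to_weight = volts_to_weight  # calibration: volts -> weight units (assumed)
        self.previous_total = 0.0

    def total_weight(self, load_cell_volts):
        # Sum the load-cell signals and convert voltage to weight.
        return sum(load_cell_volts) * self.volts_to_weight

    def item_entered(self, load_cell_volts):
        # Weight of the newly arrived item = increase in total scale weight.
        total = self.total_weight(load_cell_volts)
        item_weight = total - self.previous_total
        self.previous_total = total
        return item_weight

    def item_left(self, load_cell_volts):
        # Weight of the departing item = drop in total scale weight.
        total = self.total_weight(load_cell_volts)
        item_weight = self.previous_total - total
        self.previous_total = total
        return item_weight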
As with the data sources, the list of parameter processors above is exemplary, not an exhaustive listing. For example, Figure 8 includes an optional image processor 183. Furthermore, it should be noted that any of the parameter processors described in this document can be omitted in particular embodiments. For example, where the size, shape, and marking area parameters are sufficient to identify objects in the detection volume, there may be no need to include weight parameters.
Geometric Parameter Matching
Geometric parameter matching is the process of using the known geometry of the various physical sensors and the fields of view in which they collected initial detection data to match the measured parameter values with the item to which the parameter values apply. The item description compiler 200 is the processor that collects all of the asynchronous parameter data and associates it with the appropriate item.
As the name suggests, the output of the item description compiler 200 can be called the item description associated with the item. The item description is a compilation of the parameter values collected by the parameter processors for an item measured in the detection volume.
After the item description compiler 200 has constructed an item description for a particular item, the item description may be passed to an item identification processor 300, which performs the product identification function.
In practice, although there may be multiple item description fields available, it is possible to identify items without completing each field of the item description.
For example, if a weight measurement was too noisy, or the marking area was hidden from view, blurred, or otherwise unreadable, the item description may still be sent to the item identification processor 300 instead of being held at the geometric parameter matching stage in the item description compiler 200. The item description compiler 200 might decide, for example, that having only digital marking area data is enough data to pass to the item identification processor 300, or it might determine that the item has left the detection volume and no further parameter values will come from the parameter processors.
Product Identification
By way of example, the item identification processor 300 may receive an item description from the item description compiler 200. Using the parameter value data in the item description, the item identification processor forms a query to a product description database, which in turn returns a product ID and a list of expected parameter values for that product, along with any ancillary data (such as standard deviations in those parameter values).
The item identification processor 300 decides whether the item matches the product with a sufficiently high degree of certainty. If the answer is yes, the product identification data D233 is output; if the answer is no, the item can be identified with an exception tag D232. The identification/exception decision logic can range from simple to complex in various embodiments. At the simple end of the logic scale, the item identification processor can tag as an exception any item for which the weight did not match the weight of the product described by the UPC. At the complex end of the logic scale, the item identification processor can incorporate fuzzy logic, which is a form of non-Boolean algebra that employs a range of values between true and false and is used in decision making with imprecise data, as in artificial intelligence systems.
Optionally, various exception handling routines 320 can be invoked. These routines can be as rudimentary as doing nothing or turning on a light for a human to observe, or they can be more complex. For example, the item identification processor 300 can be instructed to act as if the marking area read is in error by one or more digits and query the product description database again with variations on the marking area read.
Optionally, each successful product identification can be used to update the product description database. That is, each successful identification increases the statistical knowledge of how a product appears to the system 25. Also optionally, information regarding exception tags D232 can be added to the historical database 350 for improvement of the system 25.
Asynchronous Information Flow and Processing System
Figure 9 illustrates an embodiment of a data flow for the same elements as shown in Figure 8, with a slightly different arrangement and grouping.
The data sources illustrated are a transport location sensor 120, one or more marking area readers 130, a dimension sensor 150, an item isolator 140, and a weight sensor 170, to emphasize that each physical sensor and associated parameter processor operates autonomously from the other physical sensors and parameter processors.
The transport location sensor system 120, in some embodiments, includes the physical transport system location sensor 122 and a transport sensor processor 127. In some embodiments, such as that shown in Figure 9, the physical transport system location sensor 122 takes the form of a rotary encoder associated with a belt roller.
The initial detection data from the physical transport system location sensor 122 is a count increment, the transport sensor pulse D147, which is sent to the transport sensor processor 127. The transport sensor processor 127 then performs a scaling and summing process to convert the transport sensor pulses D147 into transport system location values D148.
As described above, the system can treat the conveyor belt as being essentially continuous and the location of the conveyor system is essentially the distance along the (continuous) conveyor belt from some arbitrary starting point.
In a particular embodiment, this distance is measured in increments of about five thousandths of an inch and may be referred to as an x coordinate. In one embodiment, the transport sensor processor 127 also uses the transport sensor pulses D147 to generate the line scan trigger signals D142 and the area camera trigger signals D151 for the various line scan cameras and the area camera, respectively. By triggering the cameras based on the movement of the transport system, rather than at fixed time intervals, the system 25 can avoid repeatedly recording images of the same field. Thus, the output of the transport sensor processor 127 includes the line scan trigger D142, the area camera trigger D151, and the transport system location D148.
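As a rough illustration of this pulse-to-location scaling and motion-based triggering, consider the following Python sketch; the increment values, class name, and trigger representation are assumptions chosen for the example, not specifics of the described system:

class TransportSensorProcessor:
    """Converts encoder pulses into a running transport-system location
    and fires camera triggers at fixed travel increments."""

    def __init__(self, inches_per_pulse=0.005, trigger_every_inches=0.005):
        self.inches_per_pulse = inches_per_pulse      # assumed encoder scale
        self.trigger_every = trigger_every_inches     # assumed trigger increment
        self.location = 0.0                           # distance along the (continuous) belt
        self._since_trigger = 0.0

    def on_pulse(self, pulses=1):
        """Handle transport sensor pulses D147; return any trigger events."""
        travel = pulses * self.inches_per_pulse
        self.location += travel                       # scaled and summed -> location D148
        self._since_trigger += travel
        triggers = []
        while self._since_trigger >= self.trigger_every:
            self._since_trigger -= self.trigger_every
            # The same motion-based event drives both the line scan trigger D142
            # and the area camera trigger D151 in this simplified model.
            triggers.append(("D142_line_scan", "D151_area_camera", self.location))
        return triggers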
In addition to a set of conventional dedicated motor controllers, transport sensor processing includes converting input belt commands D50 (e.g., stop, start, speed) received from the weight sensor 170 into motor controller signals; converting the transport system sensor pulses D147 into transport sensor location values D148; and transmitting that value to the various parameter processors, including without limitation the item isolation parameter processor 144, the dimension estimator 154, the marking area parameter processor 134, the weight generator 174 and, optionally, the image processor 183, where each parameter processor may be as illustrated and described in relation to Figure 8, above.
It will be noted that the transport sensor processor 127 can communicate directly with the various cameras to send frame triggers to them.
The transport system location D148 output from the transport location sensor system 120 is provided to the item isolator 140, the dimension sensor 150, the marking area reader 130, the weight sensor 170, any optional image processors 183 (shown in Figure 8), and the item description compiler 200.
A set of one or more line scan cameras, which is included in the marking area reader 130, is driven by the line scan trigger D142.
As shown in Figure 9, the line scan trigger D142 causes the line scan cameras to produce line scan data that initiates activity within the item isolator 140, the dimension sensor 150, and the marking area reader 130. The activity initiated by the line scan trigger D142 will be more fully described below in the descriptions of Figure 10, which describes the marking area reader 130, and Figure 11, which describes the item isolator 140 and the dimension sensor 150. Similarly, the area camera trigger D151 can trigger activity on the area camera,
which outputs area camera data to the item isolator 140 and the dimension sensor 150, as described in more detail in accordance with Figure 11. In one embodiment, there is a marking area reader 130 associated with each line scan camera, which can be a virtual marking area reader. The marking area reader 130 examines the continuous strip image produced by its line scan camera until it identifies the signature of a predetermined marking area (typically a bar code such as a UPC), at which time it decodes the image of the marking area into a digital marking area value D159. Additionally, the marking area reader 130 outputs the apparent location D236 of the marking area in camera-centered coordinates. The digital marking area data D159, the item location in the transport system D148, and the location of the marking areas in camera-centered coordinates D236 are transferred from the marking area reader 130 to the item description compiler 200.
In some embodiments, the marking area reader 130 may occasionally receive image retrieval requests D149 from the item description compiler 200, whereby the marking area reader 130 extracts an image subframe D234 containing the marking area from the continuous strip image. The extracted images of the identified marking areas are transferred to a historical database 350. The historical database 350 is an optional element of the system that can be used for post analysis, and image retrieval is similarly optional.
Note that each of the line scan cameras can detect the marking areas at different times, even for a single item.
For example, items situated on the sensing volume conveyor belt with a marking area pointing up are likely to have at least two line scan cameras that record the image of the marking area (e.g., the left and right lower observation cameras), possibly at different times.
These two UPC images will be processed as the data arrive at their respective marking area readers, with the two UPC values and associated camera-centered coordinates being sent to the item description compiler 200 asynchronously.
Returning to Figure 9, the item isolator 140 receives the line scan camera trigger D142 and the transport system location D148 from the transport location sensor system 120. The item isolator 140 outputs a unique item index (UII) value D231 with the associated item's transport system location D156 to the item description compiler 200 only when it has isolated an item.
The UII value is provided internally to the dimension estimator 154 (shown in Figures 8 and 11) and externally to the item description compiler 200 as soon as it is available.
Although item isolation is a separate logical function in the system, the computer processing of the item isolator 140 in system embodiments can work in conjunction with the dimension sensor 150 and/or the light curtain assembly. Essentially, the item isolator A) assists the dimension estimator 154 processing (shown in Figures 8 and 11) in recognizing the difference between one large item and more than one item positioned close together in the detection volume, and B) instructs the dimension estimator 154 to estimate the dimensions of one item or more than one item, respectively.
The dimension sensor 150 receives the area camera trigger D151 and the transport system location D148 from the transport location sensor system 120. The area camera, which is part of the dimension sensor 150, upon receipt of the area camera trigger D151, generates area camera image data and supplies the area camera image data to the dimension estimator 154. Additionally, working in conjunction with the item isolator 140, the dimension sensor 150 collects information about how many items are in the area camera's field of view and where the items are. The dimension sensor 150, specifically the dimension estimator, combines multiple frames from the area camera 152 to estimate the location of points that form the surfaces of each item using a triangulation process. The dimension sensor 150, including the dimension estimator processing, is described in more detail in accordance with Figure 11.
The dimension sensor 150 further transforms the estimated item surfaces to determine a bounding box for each individual item. That is, it calculates the smallest rectangular volume that would contain each item. In one embodiment, the length, height, and width of this bounding box are considered to be the dimensions of the item, ignoring any non-rectangular aspects of its shape. Similarly, a more complex bounding box can be calculated by treating the respective portions of the item as bounded by respective bounding boxes. In this approach, each object is rendered as an aggregation of parameters representing the box structures, but the overall shape of the item is somewhat preserved. Collateral parameters such as item orientation and three-dimensional coordinates on the sensing volume conveyor belt are also calculated in one embodiment. Furthermore, the dimension sensor 150 can, at the user's discretion, estimate parameter values in relation to the general shape of the item (cylindrical, rectangular solid, bottle-necked shape, etc.) by calculating higher-order image moments. These parameter values, along with the transport system location of the item to which they apply, are the sizing data D166 passed to the item description compiler 200. As an optional step, the dimension sensor 150 outputs some intermediate data, such as closed height profiles D247, to the historical database 350.
In one embodiment, a disambiguation functionality can be included that provides additional approaches to handling items next to each other that are identified by the system as a single object. In this regard, for each object profiled by the dimension sensor, in addition to providing a master profile for each item, multiple subordinate height profiles can be generated. Subordinate profiles can be generated, for example, by performing a blob detection operation on the master profile to determine whether subordinate regions exist. Where subordinate profiles are detected, both the master and subordinate profiles can be published with the item description for use by other subsystems. If no subordinate profiles are detected, only the master profile is published.
For cases where subordinate profiles are detected and multiple marking areas are read for the object that has subordinate profiles, a disambiguation process based on the subordinate profiles can be performed. In this process, the subordinate profiles are used in conjunction with a limited universe of potential item IDs. In particular, only those item IDs that correspond to the marking areas read on the object are used. Since the universe of potential matches is thus limited, matching can proceed in accordance with the approaches described in relation to the various embodiments described in the present invention. If the result of this matching process yields subordinate items that are all uniquely identifiable, the subordinate items are published in place of the multi-read object and the master item is discarded. If unique reads are not obtained, the multi-read object can be published for further analysis by the system as-is.
The weight sensor 170 is the last sensor shown in Figure 9. As discussed earlier, an embodiment of the weight sensor 170 includes the moving scale 172 and the weight generator 174 (shown in Figure 8), which sums the moving scale signals and applies a transformation to convert the voltage into weight data.
For non-singulated items, where more than one item can be in the sensing volume simultaneously (i.e., closely spaced along the sensing volume conveyor belt), the weight sensor 170 has two opportunities to estimate the weight of individual items: immediately after the item enters the detection volume and immediately after the item leaves the detection volume.
The moving scale object sensors provide the weight sensor 170 with information as to when items have entered or left the moving scale, which is used by the weight generator to determine the weight data D191 that corresponds to individual items when multiple items are located on the sensing volume conveyor belt at the same time.
When multiple items overlap as they enter or leave the detection volume, the weight sensor produces an aggregate weight for the overlapping items.
The weight sensor 170 transfers the weight data D191, which is the item weight and the item's location on the transport system, to the item description compiler 200. Optionally, the continuous stream of weight data D191 is sent to the historical database 350 in step D190. The weight sensor 170 also delivers belt control commands D50 to the transport system motor controllers, as will be described below.
As indicated in the descriptions of Figures 8 and 9, in one embodiment, the item description compiler 200 receives data from all of the various parameter sensors. The item description compiler 200 drives geometric parameter matching, which is the process of using the known geometry of the various physical sensors and their fields of view to match the measured parameter values with the item that was in the field(s) of view at the time(s) when the measurements were taken.
An item description (the output of the item description compiler 200) is compiled by matching each measured parameter value to the item known to have been in the field of view of the particular sensor at that time. As described above, where each sensor's field of view is known, for example in relation to a fixed reference point in the transport system, it is possible to associate an item detection occurrence with a particular location. From time to time, it may be useful to calibrate the system by imaging an item that has known geometry and/or marking areas, for example, an open box of a known size that has marking areas located at known positions on it.
As an example, a line scan camera looking directly down on the belt may have a field of view described as a straight line across the detection volume conveyor belt, with the center of the line at the center of the detection volume conveyor belt in the transverse dimension and 15.24 centimeters (six inches) downstream of a reference point defined for the item description compiler 200.
6th " In this example, the marker area reader 130 determines that UPC 10001101110 was read starting 200 pixels from the left end of the line-scan camera's visual field, at the time when the transport system location was at 52,070 centimeters ( 20,500 inches) from its launch point. Using known information regarding camera parameters 8 to the camera geometries relationship to the Sensing Volume Conveyor Belt, Item Description Compiler 200 can determine that the UPC has been observed 2.54 centimeters (1 inch) from the left of the sensing volume conveyor belt and at a conveyor system location of 52,055 centimeters (20,494 inches). UPC with the item (with an arbitrary UII, 2541 as an example) that was observed to be closest to the transportation system location a
52,055 centimeters (20,494 inches). Similarly, when the weight sensor, specifically the weight generator, reports that DI91 weight data for an item has been loaded on the moving scale at transport system location 20,494, the item description compiler 200 associates this DI91 weight data to the UII of item 2541. The geoparameter compatibility process is generally more complex than this simple example and makes use of knowledge of the full three-dimensional sensing field of each physical sensor. In one embodiment, the complete three-dimensional geometry of all respective sensor detection fields may be compiled into a library for use by the item description compiler 200. The library is used by the description compiler 200 to associate items and perceived parameters. . Thus, in one embodiment, it is the complete three-dimensional location of each item (e.g., a set of transverse, longitudinal, and rotational coordinates of the item) combined with the height, width, and depth of the item that are used in compiling a description of an item. complete item of each item. Because two items cannot exist in the same physical space, the D148 transport system location 1 and the bounding box description of each item can be used by the item description compiler 200 to match the parameter values to the correct item. Item identification proceeds as described above in the sections labeled Compatibility: 15 Geometric and Product Identification. In the example of a retail environment, once the product is identified, the item identification processor 300 transfers the product identification data D233 to a point of sale (POS) system 400. Alternative uses for the system are contemplated different from direct logistics retail systems and processes. For example, the system could be employed in reverse logistics, where product IDs are sent to an auctioneer, distribution center, manufacturer, or other entity.
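As an informal illustration of the simple matching example above, the following Python sketch converts a marking-area detection into belt coordinates and associates it with the nearest item; the function names, parameter names, and data layout are assumptions made for the example, not elements of the described system:

def marking_area_belt_coords(pixel_index, pixels_per_inch,
                             field_left_edge_y, camera_offset_x,
                             transport_location):
    """Map a marking-area detection to belt coordinates (all names illustrative).

    pixel_index        -- pixel position from the left end of the line-scan field of view
    pixels_per_inch    -- imaging resolution at the belt (about 200 dpi here)
    field_left_edge_y  -- cross-belt coordinate of the left end of the view line
    camera_offset_x    -- known along-belt offset of the camera's view line
                          from the compiler's reference point
    transport_location -- transport system location when the line was captured
    """
    y = field_left_edge_y + pixel_index / pixels_per_inch  # across the belt
    x = transport_location - camera_offset_x               # along the belt
    return x, y


def associate_with_nearest_item(x, items):
    """Assign the observation to the item seen closest to that along-belt location."""
    return min(items, key=lambda item: abs(item["transport_location"] - x))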
Maintenance Functions
In one embodiment, a configuration and monitoring process tracks and updates system calibration data while continuously monitoring the activity of each software process. Each process can be configured to emit a regular heartbeat signal. If the heartbeat from a particular parameter processor or subsystem is not received after a period of time, the configuration and monitoring process may stop and restart that particular parameter processor.
In embodiments that employ an asynchronous dataflow architecture, stopping and restarting any process does not generally affect any other process or require resynchronization with a clock signal.
However, some items that pass through the system during reboot may not be identified, in which case they can be handled by normal exception procedures.
File Transfer Process
The file transfer process is responsible for moving generally large, low-priority data files over the network from the various parameter sensors to the historical database 350, when this optional database is included.
The file transfer process manages the transfer of large files including, but not limited to, the line scan images produced as part of the marking area reader processing, the height profiles generated by the dimension estimator, and the weight transducer data streams.
If file transfers occur indiscriminately, high-priority real-time data transfers such as streaming line scan data could be interrupted by low-priority data transfers.
The file transfer process manages these potential conflicts.
In one embodiment, each real-time file transfer process, which is used for large low-priority (LLP) files/datasets, first stores the LLP data locally on the hard drive of the parameter processor where the datasets are created. On a regular basis, approximately every three hundred milliseconds, the file transfer process running on the one or more computers hosting that parameter processor checks for newly deposited LLP data and sends the data over the network to the historical database, which can be associated with the item identification processor for convenience. The data is transmitted in a metered manner, with limited packet sizes and packet-to-packet transmission delays applied, so that the impact on average network bandwidth is minimized.
The configuration parameters for the file transfer process reside in a configuration database. Configuration information such as packet sizes, transmission delays, and destination IP and server addresses are saved in the database. The file transfer process uses the standard file transfer protocol and is implemented in one embodiment using the open source cURL library.
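A minimal Python sketch of the metered transmission idea is shown below; the packet size, delay, and send_packet callable are illustrative assumptions (in the described embodiment such values would come from the configuration database, and the transfer itself would use the file transfer protocol via cURL):

import time

def metered_send(data, send_packet, packet_size=64 * 1024, inter_packet_delay=0.01):
    """Send a large, low-priority byte string in small packets with delays
    so that high-priority real-time traffic is not starved.

    send_packet: callable that transmits one packet; packet_size and
    inter_packet_delay stand in for values read from the configuration database.
    """
    for offset in range(0, len(data), packet_size):
        send_packet(data[offset:offset + packet_size])
        time.sleep(inter_packet_delay)  # yield the network between packets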
Marking Area Reader 130
Figure 10 is an information flow chart for one embodiment of the marking area reader. In one embodiment of the system 25, there are eleven line scan cameras and, as noted earlier, there is a (virtual) marking area reader 130 logically associated with each line scan camera, although all marking area reader processing in practice may occur on the same physical processor. The marking area reader 130 performs three functions: identifying any captured marking areas, converting them to code values, and optionally extracting the marking area images from the continuous strip image collected by the line scan camera 132. In this way, each marking area reader 130 in the embodiment effectively operates as a bar code reader. In the embodiment, the eleven marking area sensors together define a four-pi steradian marking area reading system. Each marking area reader 130 comprises a parameter processor programmed to identify the marking areas in the line scan data captured by each of the line scan cameras 132 and to interpret the marking areas into digital marking area data. As described earlier, each line scan camera 132 receives a line scan trigger D142 based on the movement of the transport system. One line of scan data is the output of a single field of the line scan camera array 131. Each line of scan data D181 collected by the line scan camera array 131 is transferred to a line scan camera buffer 133, which is internal to the line scan camera 132. The line scan camera buffer 133 compiles the line scan data D181 together into packets of two hundred lines of scan data, which may be referred to as image strips D237. In one embodiment, the nominal imaging resolution at the item for each 4,096-pixel line scan camera 132 is approximately two hundred dpi. Thus, an image strip of two hundred lines of scan data corresponds to a field of view of approximately 2.54 centimeters by 50.8 centimeters (one inch by twenty inches). Each line scan camera can be configured to transfer individual image strips from the camera to a circular acquisition buffer 135 in the marking area parameter processor 134. It should be noted that the image strips D237 are used to transfer data between the line scan camera 132 and the marking area parameter processor 134 for communication efficiency only; the data processing in the marking area parameter processor 134 is performed on a line-by-line basis. Furthermore, it should be noted that the line scan camera buffer 133 collects and saves the line scan data each time the transport system moves through the defined trigger increment, regardless of the presence of an item in the detection volume.
As discussed above, each image strip D237 is tagged with the relevant transport system location value D148, where generally one location value is all that is needed for every 200-line strip. The image strips D237 are concatenated in the circular acquisition buffer 135 to re-form their original continuous strip image format.
Consequently, even if an item or a marking area on an item crosses multiple image strips D237, the item or marking area can be processed in its entirety after additional image strips D237 are received by the circular acquisition buffer 135. In one embodiment, the circular acquisition buffer 135 is configured to contain 20,000 lines of camera data.
The marking area reader 130 extracts data from the buffer 135 and examines the line scan data D181 line by line, in a signature analysis process 136, both in the “across the strip” direction (within each line) and in the “along the strip” direction (one line to the next), to find signature features of a predetermined marking area format.
For example, UPC bar codes can be recognized by their high-low-high intensity transitions.
During signature analysis 136, the identified marking areas are extracted from the line scan data, and the extracted marking areas D158 are transferred to a coding logic process 137. The coding logic process 137 converts the image data into a machine-readable digital marking area value D159. OMNIPLANAR software (trademark of Metrologic Instruments, Inc.; acquired by Honeywell in 2008) is an example of software suitable for performing marking area identification and coding in the marking area reader.
As will be appreciated, multiple serial or parallel logic processes can be employed to allow redundant identification. In this regard, where a first approach to identifying and coding a code is not successful, a second approach may prove useful.
In one embodiment, items are generally marked with marking areas, where the marking areas conform to various predetermined patterns. Examples of marking areas that are capable of being read by the coding logic process 137 include, but are not limited to, the following: EAN-8, EAN-13, UPC-A, and UPC-E one-dimensional bar codes that carry 8-, 12-, and 13-digit Global Trade Item Numbers (GTINs).
It will be understood that the marking area reader 130 may operate continuously on the line scan data. In the context of a bar code reader, when a high-low pattern is observed in the line scan, the software attempts to identify whether it is a marking area. If identified as such, the software then converts the entire marking area into a digital marking area value. In particular embodiments, the line scan data presented to the coding logic process 137 is monochrome, so the coding logic process 137 relies on the lighting and other aspects of the optical configuration to present the information in the line scan data with sufficient resolution and contrast to allow coding of marking areas printed according to UPC/EAN standards.
The output of the coding logic process 137 contains three data items: the digital marking area value D159, the transport system location D148 corresponding to the one or more lines of scan data in which the marking areas were identified, and the location of the marking areas in camera-centered coordinates D236. In this regard, the camera-centered coordinates could describe a two-dimensional area occupied by the entire marking area. Alternately, a particular X-Y location, for example the centroid of the marking area image, a particular corner, or an edge, could be assigned to that marking area.
In addition to identifying and coding the marking areas, a second, optional function of the marking area reader 130 is to extract the images of individual items as requested by the item description compiler 200 and to transfer these images, the extracted image subframes D234, to the historical database 350. The item description compiler 200 issues an image retrieval request, along with the transport system location that describes where the item bearing the marking areas was located in the field of view of the line scan camera 132, causing a region extraction process 138 to use the image retrieval request D149 to retrieve the appropriate subframe D234 from the circular acquisition buffer 135. The region extraction process 138 then performs JPEG compression of the extracted subframe D234 and transmits it through the file transfer process to the historical database 350.
Item Isolator 140 and Dimension Sensor 150
Turning to Figure 11, an information flow chart of an embodiment of the dimension sensor 150 and the item isolator 140 is provided. The dimension sensor 150 functions primarily for item sizing, or measuring the spatial extent of individual items, while the item isolator 140 functions primarily for item isolation, or sorting or distinguishing items entering the detection volume. For example, if two boxes enter the sensing volume in close proximity, the item isolator 140 informs the rest of the system that there are two items to identify, and the dimension sensor 150 informs the system of the size, orientation, and location of each of the two items. As has been noted, these two processes operate in close coordination even though they perform distinctly different functions. Since the sizing process actually begins before the item is completely identified by the item isolator 140, the dimension sensor 150 will be addressed before the item isolator 140. In one embodiment, both the dimension sensor 150 and the item isolator 140 use the output of one of the line scan cameras 132A and the area camera 152.
Dimension Sensor 150
In one embodiment, the dimension sensor 150 includes the area camera 152 and the upper observation line scan camera 132A. The dimension estimator 154 (the parameter processor portion of the dimension sensor 150) receives data from the area camera 152, the upper observation line scan camera 132A, and the transport system location sensor 120 (shown in Figure 8).
The main function of the dimension sensor 150 is item sizing. During the height profile cross-section extraction process 153 and the aggregation process 155, the dimension sensor 150 combines multiple frames from the area camera 152 to estimate the location of points that form the surfaces of each item using a triangulation process. As deployed in one embodiment, a laser line generator continuously projects a line of light onto the sensing volume conveyor belt (and any item on it). The line is projected from above and runs substantially perpendicular to the along-the-belt direction. In operation, the line of light will run up and over any item on the belt that passes through its field of view. Triggered by the area camera trigger D151, the area camera 152 records an image of the laser line. There is a known, fixed angle between the laser line generator axis and the area camera optical axis, so the image of the light line in the area camera 152 will be shifted perpendicular to the length of the line by an amount proportional to the height of the laser line above the reference surface, which can be defined as the top surface of the conveyor belt. That is, each frame of the area camera 152 is a line of light, apparently running from one edge of the belt to the other, with wiggles and lateral steps, the wiggles and steps being indicative of a single height profile of the items on the belt.
Triggered by the area camera trigger D151, the area camera 152 provides area camera image data (a single image) each time the detection volume conveyor belt moves through the selected counting interval. In some embodiments, the contrast of this height profile can be enhanced through the use of an infrared laser and a bandpass filter, positioned in front of the area camera 152, selected to preferentially pass infrared light. The output of the area camera 152 is the area camera image data D46, which contains two-dimensional images that show only the displacement of the laser strip as it passes over the item.
The area camera 152 takes snapshots of the laser strip, which is projected across the detection volume conveyor belt (edge to edge) by the laser strip generator. The area camera image data D46 and the transport system location value D148 at the time the area camera image data D46 was recorded are distributed to the item isolation parameter processor 144 and the dimension estimator 154, which operate in close coordination.
A height profile cross-section extraction process 153 extracts a height profile cross-section D257 from the area camera image data D46 by determining the lateral displacement of the laser strip that was projected onto the item by the laser strip generator. Because there is an angle between the laser strip projection direction and the viewing angle of the area camera 152, the strip image is shifted laterally whenever the strip is intercepted by an item of non-zero height. The dimension estimator 154 uses a triangulation algorithm to calculate the cross-section of the item's height profile D257 along the original (non-displaced) path of this linear strip. Note that the height profile cross-section D257 is a height map of everything on the belt at the locations under the laser strip.
The height profile cross-section D257 is represented by a collection of height data points which are referred to in the present invention as hixels. Each hixel represents the height (z) of a point on a position grid (x, y). As shown in Figure 3A, the y coordinate represents the position across the belt, the x coordinate represents the position along the belt, and the z coordinate represents the height. The height profile cross-section extraction process 153 is applied to each frame of the area camera 152, the camera being triggered each time the transport system moves a predetermined distance, about 0.013 centimeters (0.005 inches) in one embodiment.
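For illustration only, the following Python sketch shows one simplified way such a triangulation could turn the detected lateral displacement of the laser line into hixel heights; the parameter names and the single-angle geometry are assumptions, not the calibration actually used by the described system:

import math

def hixels_from_frame(laser_column_px, reference_column_px, pixels_per_cm,
                      camera_angle_deg, x_location, y_positions_cm):
    """Convert one area-camera frame of laser-line positions into hixels.

    laser_column_px     -- detected laser-line pixel column at each cross-belt sample
    reference_column_px -- column where the line falls on the empty belt
    pixels_per_cm       -- image scale at the belt surface (assumed constant)
    camera_angle_deg    -- fixed angle between the laser plane and camera axis (assumed)
    x_location          -- along-belt location of this frame
    y_positions_cm      -- cross-belt coordinate of each sample

    Returns a list of (x, y, z) hixels using a simplified triangulation model.
    """
    hixels = []
    for y, column in zip(y_positions_cm, laser_column_px):
        displacement_cm = (column - reference_column_px) / pixels_per_cm
        z = displacement_cm / math.tan(math.radians(camera_angle_deg))
        hixels.append((x_location, y, max(z, 0.0)))
    return hixels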
The resulting sequence of height profile cross-sections is combined into groups by an aggregation process 155 to construct closed height profiles D247. The aggregation process 155 is based on a predefined minimum association distance. If the distance between any of the hixels is less than this association distance, they are considered to belong to the same group. A closed height profile D247 is created once there are no more hixels arriving from the height profile cross-section extraction process 153 that can plausibly be associated with the group.
In other words, a closed height profile D247 comprises all non-zero height points on the belt that would plausibly form part of a single item.
It should be noted that a closed height profile D247 may actually comprise two or more closely spaced items.
Each closed height profile D247 is compared to predetermined length and width dimensions to ensure that it represents an actual item and not just a few noise-generated hixels.
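As a loose illustration of this minimum-association-distance grouping, a simplified single-pass clustering sketch in Python might look as follows; the data layout and function name are assumptions, and the real process additionally closes a group once no plausible hixels remain to join it:

def aggregate_hixels(cross_sections, association_distance):
    """Group hixels from successive height-profile cross sections into
    candidate closed height profiles using a minimum association distance.

    cross_sections: list of lists of (x, y, z) hixels with z > 0.
    Returns a list of groups (each group a list of hixels).
    """
    groups = []
    for section in cross_sections:
        for hixel in section:
            x, y, _ = hixel
            home = None
            for group in groups:
                # A hixel joins a group if it lies within the association
                # distance of any hixel already in that group.
                if any((x - gx) ** 2 + (y - gy) ** 2 <= association_distance ** 2
                       for gx, gy, _ in group):
                    home = group
                    break
            if home is None:
                groups.append([hixel])
            else:
                home.append(hixel)
    return groups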
When available, the closed height profiles D247 are sent to the dimension parameter estimation process 157 and the dimension consolidation process 145. The closed height profiles D247 are optionally sent to the historical database 350. In one embodiment, the height profile can be smoothed to account for sensor noise.
In this approach, once a height profile is assembled for a single object, portions of the profile that appear to be detached can be removed.
As will be seen, removing apparently detached portions prior to profile assembly would eliminate portions of an actual object that are separated by a discontinuity; for example, a cup handle may appear as a separate object from the cup body in a particular viewing plane.
However, once the profile is assembled, this type of discontinuity would tend to be resolved, allowing smoothing to be performed without destroying information about discontinuous object regions.
It may also be useful to include a belt floor determination or zeroing function for the height profiling system.
During common use, the belt will continuously pass through the laser strip projection, and the system should measure zero object height.
In theory, the belt floor can be measured using a running average height measurement, which measurement can be used as a dynamic threshold that is subtracted from or added to the measured height of objects passing along the conveyor.
In practice, it can be difficult to distinguish an empty belt from a belt carrying short items, which could deviate from the zero measurement if treated as an empty belt.
One solution is to use a predetermined height limit and for the system to treat anything smaller than the limit height as an empty belt.
Even if a real object passes through the system, its effects will be smoothed out as a result of the running average.
This can allow slowly varying portions of the signal to be removed while retaining high frequency information.
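A minimal sketch of such a running-average belt-floor tracker follows (Python); the threshold and smoothing values are illustrative assumptions rather than values from the described embodiment:

class BeltFloorTracker:
    """Tracks the belt 'floor' with a running average, ignoring readings
    above a preset threshold so short items are not folded into the baseline."""

    def __init__(self, empty_belt_threshold_cm=0.5, smoothing=0.01):
        self.threshold = empty_belt_threshold_cm  # treat anything below this as empty belt (assumed)
        self.smoothing = smoothing                # small factor -> slowly varying baseline
        self.floor = 0.0

    def update(self, measured_height_cm):
        if measured_height_cm < self.threshold:
            # Slowly follow drift in the apparent belt height.
            self.floor += self.smoothing * (measured_height_cm - self.floor)
        # Report height relative to the current baseline.
        return max(measured_height_cm - self.floor, 0.0)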
The second data source for the dimension sensor 150 is a selected line scan camera 132A (where the suffix “A” indicates the selected camera), where the selected camera is, in this example, specifically the upper observation line scan camera array 131A.
The camera 132A produces line scan data after receiving the line scan trigger signals D142. The line scan data is then sent to a line scan camera buffer 133A as described above for the marking area reader 130. As already mentioned, many of the data processing functions are used for both item sizing and item isolation. Accordingly, the line scan camera buffer 133A sends image strips to the circular acquisition buffer 135A, which is illustrated in Figure 11 as being arranged in the item isolation parameter processor 144. Furthermore, as one of ordinary skill in the art will recognize, the various data processing steps illustrated in the present invention are grouped as belonging to a particular processor (e.g., item isolation parameter processor, dimension estimator, etc.) only as a matter of convenience of explanation, and such grouping is not intended to indicate in which physical processing unit such processing steps take place.
The upper observation line scan camera is arranged to observe the bottom of items on the sensing volume conveyor belt. This camera is aligned to image through the small gap between the inlet conveyor belt and the sensing volume conveyor belt. Unlike the other line scan cameras, the upper observation line scan camera does not need a large depth of focus because it is generally looking at a consistent plane. That is, the bottom of each item tends to be approximately in the plane of the sensing volume conveyor belt. In general, each line scan comprises some dark pixels (where no items are above the gap) and some bright pixels (where part of an item is over the gap). The silhouette generator 141 in the item isolation parameter processor 144 processes the line scan data D181 received from the circular acquisition buffer 135A line by line and determines whether the intensity of any of the pixels exceeds a predetermined threshold. Pixels that exceed the threshold are set to the high binary level, while those below the threshold are set to the low binary value, 0. Any line that contains at least one high value is called a silhouette D242. (A line without a high value is a null silhouette.) It will be understood that any silhouette can contain information about multiple items. The silhouette D242 produced by the silhouette generator 141 is sent to a sketch generator 143, which is the logical process for building background sketches.
In conjunction with the upper observation line scan camera, the light curtain assembly also observes the gap 36 and the objects passing over it. As described above, paired arrays of LEDs and photodiodes detect shadowed portions of the scanned line. Because the light curtain is a bright field detector, its silhouettes correspond not to the bright pixels, as with the upper observation line scan camera, but rather to the dark pixels. For many objects, both detectors will mark the same silhouette positions. However, for certain objects, one of the two detectors may fail to observe the item. For example, the light curtain can fail when a transparent object passes through its field of view, while the camera can fail when confronted with an object that is a poor reflector. In one embodiment, the two silhouettes may be subjected to a Boolean OR operation so that if one or both detectors identify an object, the object is noticed by the system. Alternatively, the two systems can operate independently and produce their own sets of parameters for evaluation by the system.
The sequence of silhouettes is combined into clusters by an aggregation process similar to the cluster generation that occurs in the sketch generator 143. The sketch generator 143 is based on a defined minimum association distance. If the distance between any two high pixels in the silhouette sequence is less than this association distance, they are considered to belong to the same cluster. Thus, a cluster includes both pixels along a scan line and pixels on adjacent scan lines. The background sketch D244 of each pixel cluster is computed by taking slices along the x (along the belt) and y (across the belt) directions, and by finding the first and last transitions between cluster pixels and background for each row and column. That is, if there are gaps between cluster pixels along a row or column, the processor skips these transitions, because there are more pixels in the same cluster further along the row or column. This background sketch definition assumes that items are generally convex.
When this sketch extraction approach is used, holes inside the items will be ignored. The background sketch D244 is used during the dimension consolidation process 145. For a system that incorporates both a light curtain and a line scan camera, there may be two background sketches D244, or alternatively both sets of acquired data can be used together to define a single background sketch D244. For the purposes of the following discussion and associated Figures, the separate or joined outline is referred to as D244, and the singular is to be understood to encompass the plural.
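As an informal sketch of this first/last-transition outline extraction, the following Python fragment records, for each occupied row and column of a cluster, only the extreme cluster pixels; the data layout and function name are assumptions made for illustration:

def background_sketch(cluster_pixels):
    """Compute a background sketch from a cluster of high (item) pixels.

    cluster_pixels: set of (row, col) positions belonging to one cluster.
    For every occupied row and column, only the first and last item pixels
    are kept, so interior gaps (holes) are ignored, consistent with the
    assumption that items are generally convex.
    """
    rows = {}
    cols = {}
    for r, c in cluster_pixels:
        lo, hi = rows.get(r, (c, c))
        rows[r] = (min(lo, c), max(hi, c))   # first/last transition in this row
        lo, hi = cols.get(c, (r, r))
        cols[c] = (min(lo, r), max(hi, r))   # first/last transition in this column
    return {"row_extents": rows, "col_extents": cols}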
The background sketch D244 is used in some embodiments to refine the understanding of each item's dimensions. For example, as described above, the laser strip viewed by the area camera 152 is at an angle relative to the detection volume. Because of the angle, tall items can shadow adjacent short items. Information from the upper observation line scan camera can allow the item sizer and isolator to more reliably detect those shadowed items and report their background sketches in the x and y dimensions.
Before the length, width, and height of the smallest bounding box that encloses an item are calculated during the dimension parameter estimation process 157, the closed height profile D247 can be mathematically rotated (in the plane of the conveyor belt) to a default orientation during the dimension consolidation process 145. In some embodiments, the closed height profile D247 is projected onto the x-y plane (i.e., the conveyor belt plane) to correlate with the transverse, longitudinal, and rotational coordinate set of the background sketch D244. The first and second moments of these points are calculated, from which the orientation of the major and minor axes is derived. The closed height profile D247 can then be mathematically rotated such that these axes are aligned with the rows and columns of a temporary image buffer, thereby simplifying the item length and width calculations.
The item length can be defined as the greater of the two dimensions in the x-y plane, while the width is defined as the lesser. The item height is calculated by taking a histogram of all of the item height data from the closed height profile and finding a value near the peak (for example, the 95th percentile).
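A compact Python sketch of this moment-based rotation and percentile sizing is shown below; it is a simplified stand-in for the described processing, and the percentile value and use of NumPy are assumptions for the example:

import numpy as np

def item_dimensions(hixels, height_percentile=95):
    """Estimate orientation, length, width, and height from a closed height profile.

    hixels: array-like of (x, y, z) points with z > 0.
    """
    pts = np.asarray(hixels, dtype=float)
    xy = pts[:, :2] - pts[:, :2].mean(axis=0)       # first moments -> centroid removed
    cov = np.cov(xy, rowvar=False)                  # second moments
    _, eigvecs = np.linalg.eigh(cov)                # minor/major axes
    rotated = xy @ eigvecs                          # align axes with buffer rows/columns
    extents = rotated.max(axis=0) - rotated.min(axis=0)
    length, width = max(extents), min(extents)      # greater and lesser planar dimensions
    height = np.percentile(pts[:, 2], height_percentile)  # value near the histogram peak
    orientation = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))
    return length, width, height, orientation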
For subsequent validation of the item during the dimension consolidation process 145, additional moments can be computed describing the height of the item. After rotating the closed height profile D247, the second three-dimensional moments are calculated. In calculating these moments, the item is considered to have uniform density, filled from the top of the measured height down to the surface of the belt. The dimension system generates parameters including, but not limited to, the second moments, which are distinct from those used to determine item orientation, and the width, length, and height, which are stored in a historical database. These parameters, along with weight information from the weight sensor and marking areas from the marking area readers, are used to validate the item.
Once a background sketch D244 is complete (in the sense that no more pixels will be associated with that group of pixels), feature extraction is performed to determine the item's orientation, length, and width. In some embodiments, pixels along the outline (perimeter) of a cluster in the x-y plane (i.e., the plane of the detection volume conveyor) are analyzed. Pixels within the sketch are treated as filled, even if there are holes in the interior of the actual item. The first and second moments of these points are calculated, and the major and minor axis orientations are derived. The background sketch D244 is then mathematically rotated such that these axes are aligned relative to the rows and columns of a temporary image buffer, simplifying the background sketch length and width calculations. The length, width, and orientation of the background sketch, and the second moments, collectively known in the present invention as the consolidated data D256, are sent to the item isolation process 146 and the dimension parameter estimation process 157.
The background sketches D244 and the closed height profiles D247 are also used in the dimension parameter estimation process 157. The dimension parameter estimation process 157 also receives the UII value D231 along with the corresponding transport system location D148 related to an item.
In the dimension parameter estimation process 157, the dimension estimator 154 receives the background sketch D244, the UII value D231 with the transport system location D148, and the closed height profile D247 to determine a bounding box for each individual item. In some embodiments, because noise from even a single errant pixel could adversely change the measurement, an item's length, width, and height are not based on the maximum extent of the aggregated pixels. Instead, the dimension consolidation process 145 computes a histogram that counts the pixels in each of the three dimensions after the item has been rotated to the default orientation. Distances are computed between limits of about one percent and about ninety-five percent to generate the item's length, width, and height.
If an item does not produce a background sketch, the only dimensioning data produced for the item is a closed height profile. This can occur, for example, if the bottom of the item is very dark, as perhaps with a jar of grape jam, although supplemental use of the light curtain will tend to address this issue. Feature extraction and item isolation are simply performed on the closed height profile when the closed height profile D247 is the only sizing data produced. If light curtain data and a closed height profile are available and camera data is not, then these two can be used. If a group has one or more background sketches D244 and one or more closed height profiles D247, there are several choices for feature extraction. In one embodiment, the system may ignore the background sketches and operate based only on the closed height profiles. In other words, in this approach, the background sketches are only used to aid in the interpretation of the dimensioning data collected from the closed height profiles. Feature extraction based on multiple closed height profiles is performed just as it is for a single closed height profile, but uses data from the group of closed height profiles. Finally, if the dimension parameter estimation process 157 has not received a closed height profile D247 corresponding to the transport system location value D148, the dimension parameter estimation process 157 will only have the background sketch D244 with which to determine the sizing data D166 for the item. For example, a greeting card has a height too short to be detected by the dimension sensor 150. Therefore, the height of the item is defined as zero, and the length and width of the item are determined simply from the background sketch. The length and width are calculated by rotating and processing the background sketch x-y data as described above for the dimension estimator 154 using first and second moments. When no closed height profile is available, the second three-dimensional moments are not calculated.
Periodically, the dimension parameter estimation process 157 checks the transport system location D148 and sends the collected sizing data D166 to the item description compiler 200 when it determines that there are no additional closed height profiles D247 or background sketches D244 to be associated with a particular item. The dimension estimator 154 also uses the data to estimate various sizing data D166 including, but not limited to, parameter values related to the general shape of the item (cylindrical, rectangular solid, bottle-necked shape, etc.), the item's orientation on the transport system, and details related to the item's three-dimensional coordinates on the sensing volume conveyor belt. In this embodiment, the dimension sensor 150 is also capable of calculating other parameter values based on the size and shape of the item. The various sizing data D166, along with the transport system location values D148 of the items, are sent to the item description compiler 200 as they are calculated.
Item Isolator 140
Figure 11 also shows the item isolator 140, which can allow the system to operate on items that have not been singulated. In operation, the item isolator 140 recognizes that something (one or more items) has entered the detection volume. During the dimension consolidation process 145, when a closed height profile D247 and a background sketch D244 overlap spatially (i.e., are at least partially consolidated), they can be associated with a single item, and the item isolator 140 may have isolated an item crossing the detection volume. In the item isolation process 146, the item isolator 140 consolidates the closed height profile D247 with the background sketch D244, generating consolidated data D256. Due to the way the background sketch D244 and closed height profile D247 descriptions are created, all background sketches D244 are mutually spatially disjoint, and all closed height profiles D247 are mutually spatially disjoint.
The dimension consolidation process 145 waits for an event.
The dimension consolidation process 145 stores and tracks closed height profiles D247 and background sketches D244 as they are received.
When a new closed height profile D247 is received, the dimension consolidation process 145 checks the collection of background sketches D244 to determine whether the closed height profile D247 and a particular background sketch D244 spatially overlap.
Closed height profiles D247 and background sketches D244 that spatially overlap are placed in a group.
The dimension consolidation process 145 does not check the closed height profile D247 against other closed height profiles, because they are, by definition, disjoint.
Similarly, after a new background sketch D244 is received, it is checked against the collection of received closed height profiles D247 to see whether the background sketch D244 overlaps any closed height profiles D247. During the dimension consolidation process 145, the item isolator 140 matches the transport system location values D148 of the background sketch D244 with any closed height profile D247 that shares substantially the same transport system location values D148.
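For illustration only, a simplified sketch of this overlap-based consolidation, with hypothetical data structures standing in for the background sketches D244 and closed height profiles D247, could be written as:

```python
from dataclasses import dataclass

@dataclass
class Footprint:
    """Axis-aligned x-y extent of a background sketch or closed height profile."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

def overlaps(a: Footprint, b: Footprint) -> bool:
    # Records are consolidated when their footprints intersect in the x-y plane.
    return (a.x_min <= b.x_max and b.x_min <= a.x_max and
            a.y_min <= b.y_max and b.y_min <= a.y_max)

def consolidate(sketches, profiles):
    """Group each background sketch with the closed height profiles it overlaps.
    Sketches are mutually disjoint, as are profiles, so matching only happens
    across the two collections."""
    groups, leftover = [], list(profiles)
    for sketch in sketches:
        matched = [p for p in leftover if overlaps(sketch, p)]
        leftover = [p for p in leftover if p not in matched]
        groups.append({"sketches": [sketch], "profiles": matched})
    # Profiles with no overlapping sketch (e.g., a very dark item bottom) form their own groups.
    groups.extend({"sketches": [], "profiles": [p]} for p in leftover)
    return groups
```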
At this point, the item isolator 140 recognizes the background silhouette of the item and the height of substantially every point on the item, and is ready to submit the consolidated data D256 to the item isolation process 146.
Second, the item isolator 140 determines how many distinct items comprise the object that entered the detection volume. In certain cases, several individual items are mistaken for a single item in one or the other dataset. The purpose of the item isolation process 146 is to determine when the closed height profiles D247 and the background sketches D244 represent the same single item and when they represent multiple items. Third, the item isolator 140, specifically the item indexer, assigns a unique item identifier (UII) value D231 to each distinct item, and fourth, along with the UII D231, the item isolator 140 identifies the two-dimensional location of the item (the transport system location value D148). Knowing that the consolidated data D256 probably belongs to a single item, the item isolator 140 assigns a UII value D231 to the consolidated data D256 with known transport system location values D148. The item isolation process 146 results in the UII value D231, along with the transport system location D148, which is communicated to the dimension parameter estimation process 157 for further processing by the dimension estimator. The dimension parameter estimation process 157 receives the UII value D231 and the consolidated data D256 with known transport system location values D148, and outputs the dimensioning data D166 (with the UII value D231 and the transport system location) to other parts of the system (particularly the item description compiler 200, as shown in Figures 8 and 9). The item isolation process 146 improves system output reliability.
In one embodiment, a failure of the item isolator 140 stops all system operations, because the system cannot verify the number of items in the detection volume or the location of those items, and therefore does not know what to do with the data from the parameter sensors.
However, failure of only a portion of the item isolation system need not stop the system.
The item isolation process 146 allows the item isolator 140 to continue to function if the upper observation line-scan camera stops functioning, by using light curtain data and/or closed height profiles D247 for each item.
Conversely, if the dimension estimator 154 fails and the upper observation line-scan camera and/or the light curtain continue to work, the background sketches D244, but not the closed height profiles D247, will be reported.
The system may continue to operate in a degraded mode in which item heights are not available for item identification.
However, determining the weight, length, and width of the item is still possible, and items will generally not pass through the detection volume undetected, even if the number of exceptions increases.
Weight Sensor 170
Referring now to Figure 12, a schematic illustration of the weight sensor 170 is shown.
The weight sensor 170 includes a moving scale 172 and a weight generator 174. The moving scale 172 includes object sensors (the input conveyor belt object sensor 173A, the sensing volume input object sensor 173B, and the sensing volume output object sensor 173C are shown) and load cells 175A, 175B, 175C, and 175D.
The object sensors, such as the input conveyor belt object sensor 173A, the sensing volume input object sensor 173B, and the sensing volume output object sensor 173C, allow the weight generator to track which items are on the moving scale 172 at a given time. The sensing volume input object sensor 173B is positioned near the inlet end of the sensing volume. The sensing volume output object sensor 173C, positioned close to the sensing volume output end, together with the sensing volume input object sensor 173B, provides loading information that allows the system to accurately calculate the weights of multiple items in the detection volume at a given time. The input conveyor belt object sensor 173A is positioned several inches upstream of the inlet end of the sensing volume conveyor belt and allows for an optional mode of operation in which the inlet conveyor belt can be stopped.
In other words, the inclusion of the object sensors allows the system to estimate the weight of most individual items by combining the total instantaneous weight on the sensing volume conveyor belt (not shown in Figure 12) with the items' transport system location values D148. However, in some embodiments, accurate weight data D191 cannot be measured by the weight generator 174 when items enter the detection volume while other items are leaving it. Therefore, in these embodiments, the object sensors can be employed to prevent simultaneous loading and unloading of items on the moving scale 172. In other words, the object position logic 176, upon receipt of the transport system location D148 and data from the input conveyor belt object sensor 173A, the sensing volume input object sensor 173B, and the sensing volume output object sensor 173C, can determine that an item would be entering the sensing volume at the same time another item is leaving it, and may signal the transport system to withhold the passage of any items into the sensing volume if there is an item about to leave the detection volume. In other embodiments, the object position logic may also stop the sensing volume conveyor belt if, for example, the scale does not have time to settle after loading a new item. The object position logic 176 transmits start and stop signals D115 to the differencing and averaging process 178, where the averaging of, and differencing between, the raw data received from the load cells 175 are timed so that the calculations are performed over the appropriate intervals. It will be noted that stopping and starting the conveyor belts to hold items from loading into or unloading from the detection volume has no negative effect on measurements made by the system; stopping the inlet conveyor belt merely spaces out the items on the sensing volume conveyor belt, while stopping the sensing volume conveyor belt puts all digital processing steps into a suspended state that resumes when the belt restarts.
As shown in Figure 12, the object position logic 176 additionally uses information received from the object sensors, along with the transport system location D148, to issue belt control commands D50. These commands are sent to the transport location sensor 120 (Figure 9), in which, in one embodiment, the motor controllers reside. For example, using information received from the sensing volume output object sensor 173C, the object position logic 176 can determine that an item is about to leave the sensing volume. In order to prevent an item from entering the sensing volume at the same time, the object position logic 176 may send a belt control command D50 to prevent the inlet conveyor belt from continuing to transport items toward the detection volume. Additionally or alternatively, the belt control commands D50 may include increasing or decreasing the speed of the conveyor belts in order to limit the number of items that an operator of the system 25 can physically place on the incoming conveyor belt.
"Similarly, in some embodiments, the moving scale 172 may require periodic self-calibration time during which no items are allowed on the sweep tunnel conveyor belt, allowing it to return to its tare weight in order to maintain accuracy. calibration is achieved by stopping the incoming conveyor belt. Other D50 belt control commands may be transmitted by object position logic 176, depending on the specific application contemplated. Load cells 175A, 175B, 175C, and 175D are arranged in the load path and typically support the sensing volume conveyor belt (not shown in Figure 12, but shown at least in Figure 2B) Each load cell generates an electrical signal - 15 proportional to the compressive force applied to the cell In some embodiments, the 175A, : 175B, 175C, and 175D load cells are scanned at a high sample rate (for example, 4000 samples per second) before being transmitted for processing by weight generator 174.
The high-sample-rate load cell samples are received by the summing process 177, where the signals from the load cells are summed and scaled to represent the total weight of the moving scale 172 and any items on the moving scale 172. The total weight data D190 from the summing process 177 is optionally sent to the history database. Additionally, this sum is low-pass filtered (or averaged) to improve the signal-to-noise ratio and generate a more accurate total weight in the differencing and averaging process 178. The number of digital samples included in the average calculated during the differencing and averaging process 178 is limited by the number of samples taken while the weight on the moving scale 172 is stable.
For example, if only one item is loaded onto the sensing volume conveyor belt, the stable period extends from the time the item is completely on the sensing volume conveyor belt to the time the item begins to move off of it. When more than one item is on the sensing volume conveyor belt at a given time, the stable periods are limited to the times when no items are moving onto or off of the sensing volume conveyor belt.
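As an informal sketch only (Python with NumPy assumed; the 4000-samples-per-second figure and four-channel layout come from the embodiment above, while the filter length is illustrative), the summing, filtering, and stable-window averaging might be expressed as:

```python
import numpy as np

def filtered_total_weight(load_cell_samples, scale_factor, kernel=64):
    """Sum the four load cell channels, scale to weight units, and low-pass
    filter (moving average) to improve the signal-to-noise ratio, in the spirit
    of the summing process 177 feeding the differencing and averaging process 178."""
    samples = np.asarray(load_cell_samples, dtype=float)   # shape (4, N)
    total = samples.sum(axis=0) * scale_factor             # (N,) total weight stream
    return np.convolve(total, np.ones(kernel) / kernel, mode="same")

def stable_window_average(total_weight, start, stop):
    """Average the filtered total weight over one stable period, i.e. over samples
    taken while no item is moving onto or off of the scale."""
    return float(total_weight[start:stop].mean())
```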
In a noise free environment, the weight generator could identify stable periods from the data alone.
However, the weight generator typically operates in the presence of some, if not a significant amount of noise.
The object sensors 173A, 173B, and 173C therefore inform the weight generator (via the object position logic 176) when items are loading onto or unloading from the sensing volume conveyor belt, so that the averaging windows can be set properly.
It should be noted that although the language used in the present invention suggests timing considerations, in one embodiment the system process does not rely on a clock signal, but rather is pulsed only by incremental movements of the scan tunnel conveyor belt.
In this way, a stable period can be extended by stopping the scan tunnel conveyor belt, and the actual number of samples in the average will continue to increase at the data sample rate (4000 samples per second in one embodiment).
Additionally, the differencing and averaging process 178, as commanded by the start and stop signals D115, performs a differencing operation between the weight values obtained before an item is loaded onto or unloaded from the scale 172 and after that item is loaded or unloaded. Weight values obtained in this way are assigned to the item or items loaded onto or unloaded from the scale 172 during that transition. There are several alternative approaches to performing the differencing function that can be used to obtain essentially the same weight data D191. Selection among these alternatives is generally determined by the available hardware and digital processing resources and by the operating conditions (e.g., load cell signal-to-noise ratio, load cell drift, etc.). A particular approach is discussed below in conjunction with Figure 13. Returning to Figure 12, the weight values D191A are transferred from the differencing and averaging process 178 to a weighing process 179, where the weight values D191A are combined with the object position data D113, which is data that was generated by the object position logic 176. It should be noted that the object position logic 176 cannot identify individual items in an overlap condition. The object positions D113 are determined by combining the entry and exit signals of the object sensors with the transport system locations D148. The combination of item weights and object positions is the item weight data D191. For non-overlapping items, the item weight data is the item's weight; for overlapping items, the item weight data is the combined weight of more than one item. The item weight data D191 is passed to the item description compiler 200. Optionally, the continuous stream of total weight data D190 is sent to the history database 350 (as shown in Figure 8).
As mentioned above, several approaches are available for calculating the weights of individual items on the scale 172. Figure 13 provides timing diagrams that schematically show the output of each element of an embodiment of the weight sensor 170 that is schematically illustrated in Figure 12. The first line of data at the top of Figure 13 provides an example of the output of the summing process 177. The second line of data of Figure 13 provides an example of the output of the input conveyor belt object sensor 173A. The third line of data in Figure 13 provides an example of the output of the sensing volume input object sensor 173B. The fourth line of data of Figure 13 provides an example of the output of the sensing volume output object sensor 173C. The first line of data of Figure 13 illustrates the summed, scaled, and digitized load cell signals as a function of time, in which a constant transport system speed is assumed. The second, third, and fourth lines of data of Figure 13 show the (binary) outputs of the three object sensors.
In the second line of data of Figure 13, item A is shown first detected by the input conveyor belt object sensor 173A, in the third to fourth time interval.
While item A remains on the input conveyor belt (as detected by the input conveyor belt object sensor 173A), the first line of data shows that the weight sensor 170 does not detect a weight value, as shown by the constant value of zero from the start of the clock to the fifth time interval.
As item A enters the sensing volume conveyor belt, as shown in the third line of data from the fifth to the sixth time interval, the sensing volume input object sensor 173B detects the presence of item A.
The weight of item A is recorded by the weight generator, as shown from about point (5, 0) to about point (6, 3) in the first line of data in Figure 13. After item A has completely crossed the belt gap and is located entirely on the sensing volume conveyor belt, the weight sensor 170 shows the weight of item A as static, from about point (6, 3) to about point (11.5, 3). Aided by the object position logic 176, the differencing and averaging process 178 averages the load cell signals during the first indicated acceptable averaging window and takes the difference between the weight value of 3 obtained at the end of said first acceptable averaging window and the weight value of 0 obtained just before item A was loaded onto the scale (as indicated by object sensors 173A and 173B). As shown in the second line of data, from the ninth-and-a-half time interval after system start to nearly the eleventh time interval, the input conveyor belt object sensor 173A detects the presence of another item, B, on the inlet conveyor belt.
As item B enters the sensing volume on the sensing volume conveyor belt, the sensing volume input object sensor 173B detects the presence of item B from about 11.5 to about 13.5 on the time (x) axis.
The total weight of item A and item B is recorded by the weight sensor 170, as shown from about point (11.5, 3) to about point (13.5, 9) in the first line of data.
After item B has completely crossed the belt gap and is located entirely on the sensing volume conveyor belt, the total weight of item A and item B is static, from about point (13.5, 9) to about point (20, 9). Aided by the object position logic 176, the differencing and averaging process 178 averages the load cell signals during the indicated second acceptable averaging window and takes the difference between the weight value of 9 obtained at the end of said second acceptable averaging window and the weight value of 3 obtained previously for item A.
That is, since the weight sensor 170 knows that item A weighs about three units, and the aggregate weight of item A and item B is nine units, the system calculates that item B weighs about six units.
As shown in the fourth line of data of Figure 13, from the twentieth time interval to the twenty-first time interval after system start, the sensing volume output object sensor 173C detects the presence of item A leaving the detection volume on the sensing volume conveyor belt.
As item A leaves the sensing volume on the outgoing conveyor belt, the weight sensor 170 detects a decreasing weight value from about point (20, 9) to about point (21, 6). The weight sensor 170 can thus check the weight of item A.
Since the weight value dropped from about nine units to about six units when item A left the detection volume, item A weighs about three units. After item A has completely traversed the detection volume and is located entirely on the exit conveyor belt, the weight sensor 170 shows the weight of item B as static, from about point (21, 6) to about point (27, 6). Again, the weight sensor 170 can verify its first calculation of the weight value for item B by detecting a static weight value of about six units during the time period when only item B is detected on the sensing volume conveyor belt.
As shown in the fourth line of data, from the twenty-seventh time interval to the twenty-ninth time interval after system start, the sensing volume output object sensor 173C detects the presence of item B leaving the detection volume on the sensing volume conveyor belt. As item B leaves the detection volume onto the exit conveyor belt, the weight sensor 170 detects a decreasing weight value from about point (27, 6) to about point (29, 0). Subtracting zero from six verifies that the item that has just left the detection volume (item B) weighs six units.
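Purely to illustrate the differencing arithmetic walked through above (the data structures are hypothetical and the figures are the example values from Figure 13), a sketch could be:

```python
def assign_item_weights(stable_averages, events):
    """Minimal sketch of the differencing approach: each load/unload event is
    assigned the change in averaged scale weight between the stable windows
    bracketing it.  `stable_averages` is an ordered list of averaged total
    weights for consecutive stable periods; `events` lists the item and
    direction of each transition between those windows."""
    weights = {}
    for i, (item, direction) in enumerate(events):
        delta = stable_averages[i + 1] - stable_averages[i]
        # Loading increases the total weight; unloading decreases it.
        weights[item] = delta if direction == "load" else -delta
    return weights

# Worked example from Figure 13: A loads (0 -> 3), B loads (3 -> 9),
# A unloads (9 -> 6), B unloads (6 -> 0).
print(assign_item_weights([0, 3, 9, 6, 0],
                          [("A", "load"), ("B", "load"),
                           ("A", "unload"), ("B", "unload")]))
# {'A': 3, 'B': 6}  -- the unload transitions re-verify the loaded weights
```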
Load cell weight sensors often exhibit zero-offset drift over time and with temperature variations. This potential drift is shown schematically in the first line of data in Figure 13 for time intervals beyond 29. In one embodiment of the system, this drift is automatically zeroed out during periods when no items are on the scale, aided by the object position logic 176.
The calculation approach described above may fail to operate properly when an item is loaded onto the scale at the same time a second item is unloaded. To avoid this condition, in one embodiment of the object position logic 176, an AND condition between the input conveyor belt object sensor 173A and the sensing volume output object sensor 173C generates a command to stop the inlet conveyor belt until the existing item has cleared the detection volume. This belt motor control command D50 can be transmitted to the transport sensor processor 127 (Figure 9), where the motor controllers reside, for convenience.
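A minimal sketch of this AND condition, with hypothetical function and signal names, is:

```python
def belt_command(inlet_sensor_active: bool, exit_sensor_active: bool) -> str:
    """Minimal sketch of the AND condition described above: when an item is
    about to be loaded (inlet sensor 173A active) while another item is about
    to be unloaded (exit sensor 173C active), stop the inlet conveyor belt so
    that loading and unloading never overlap on the moving scale."""
    if inlet_sensor_active and exit_sensor_active:
        return "STOP_INLET_BELT"   # held until the exiting item clears the volume
    return "RUN"
```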
As mentioned, there are multiple alternative approaches to processing the total weight signals D190 to estimate the weights of individual items when they are not singulated on the scale, generally including making weight estimates before, during, and/or after each item enters and/or leaves the scale. In addition, alternative approaches exist which, under certain operating conditions, can estimate the weights of individual items even if they partially overlap. For example, consider the total weight values illustrated in the first line of data of Figure 13. The slopes of the transition lines between the acceptable averaging windows are proportional to the weights of the items being loaded onto or unloaded from the scale. When two partially overlapping items are loading onto the scale, the slope of the transition line changes as the number of items being loaded changes. Thus, in a noise-free environment, it is a trivial exercise to apportion the total weight measured during the stable period between the two overlapping items that were loaded onto the scale.
Geometric Consolidation Process Occurring Within the Item Description Compiler 200
Figure 14 is a data flow diagram for the item description compiler 200, which drives the geometric consolidation process. The item description compiler 200 aggregates the parameter values corresponding to an individual item into an item description, where the parameter values are received from the various parameter processors. In the embodiment shown in Figure 14, the parameter values are shown as the UII value D231, dimensioning data D166, weight data D191, and digital symbol data D235, but other parameter values are contemplated in the present invention.
Each parameter value, as presented to the item description compiler, includes its corresponding transport system location values D148. The item description compiler 200 uses these location values to match the parameter values that apply to a single item.
The set of parameter values matched is the item description.
The item description, when judged complete by the item description compiler 200, is then
provided to the product identification processor.
The item description compiler 200 uses a geometry-based data joining technique, which uses the object joining library described above to aggregate the item parameter values that are produced asynchronously.
Time could be used to correlate the various parameter values with a unique item; however, because the various parameter values may be produced at different times as the item moves through the scan tunnel, and because the belt speed may not be constant, this approach can be difficult to implement.
However, the transport system location at which each item is placed is a fixed parameter associated with that item (once it enters the scan tunnel), as is the transport system location, relative to a known reference location, at which each sensor takes its measurement.
Therefore, each measured parameter value can be matched to the item that was at the sensor location at the time of measurement.
During system operation, the transport location sensor 120 (shown in Figure 9) continuously supplies a transport system location value to each parameter processor. Each parameter processor tags the parameter values it produces with the transport system location value corresponding to the time at which its raw data was collected. Additionally, the item isolator 140 and the dimension sensor 150 (both shown in Figures 9 and 11) provide a complete three-dimensional location for each isolated item, which means they provide the item description compiler 200 with a mathematical description of where the surfaces of each item are in camera space. The calibration data library 250 is a record of where in physical space each imaging element of each parameter sensor is aimed. The transformation process 202 converts the mathematical description of the surfaces of each item from camera space to physical space, with precise spatial positioning information (x, y, z).
The transformation process 202 uses detailed knowledge of each parameter sensor's three-dimensional visual field (e.g., the vector describing where each pixel of each line-scan camera is pointed in three-dimensional space). With this information, the item description compiler 200 can associate data from the multiple parameter sensors with the item that was at a particular transport system location, as long as the spatial uncertainty of each measurement coordinate can be kept sufficiently small. In one embodiment, all spatial measurements are known to accuracies of generally less than about two-tenths of an inch. The smallest features that require spatial association are the symbols, which, in practice, measure at least about two-tenths of an inch in their smallest dimension, even with the minimum line widths specified by the GS1 standard. Consequently, even the smallest symbols can be uniquely associated with the spatial accuracies of the described embodiment.
The first step in being able to spatially associate parameter values with a particular item is to calibrate the absolute spatial positions of each parameter sensor measurement. For example, the front-left line-scan camera symbol reader transmits the digital symbol value, along with the line-scan camera pixel number of the center of the symbols and the transport system location value D148 at which the camera was triggered by reading the first corner of the symbols. The item description compiler takes this information and transforms the pixel number and transport system location into absolute spatial coordinates.
For the symbol reader, the pixels corresponding to the four extreme points that define the edges of the visible plane inside the detection volume are identified through the precise positioning of two imaging targets, one at each end of a given camera's field of view (at the extreme ends of the detection volume), and as close to the line-scan camera as possible within the detection volume. The pixels that form images of these targets define the two points at the near end of the visible image plane. The process is repeated for the two extreme points at the far end of each line-scan camera's visual field.
For example, for the side-view line-scan cameras, targets are placed just above the scan tunnel conveyor belt and at the maximum item height, as close as possible to the inlet mirror within the sensing volume. The same targets are imaged at the distal end of that line-scan camera's swath. The (x, y, z) coordinates of each test imaging target are recorded, along with the particular camera and the camera pixel number in which the image of each target appears. These coordinates define the imaging plane for that camera. Through interpolation or extrapolation, the imaging ray for any pixel of that line-scan camera can be derived from those four points, and marking areas reported from that line-scan camera can be mapped to the optical ray along which they were imaged. Precise (x, y, z) spatial positioning information is known for each imaging target during geometric calibration. In some embodiments, the coordinate system is as illustrated in Figures 3A and 3B. Geometric calibration is performed manually, without making use of data from the transport location sensor, although the dimension estimator uses that data for its own processing. However, automated geometric calibration is also possible, using data from the transport location sensor. In one embodiment, the geometric calibration data is stored in a library 250. However, it should be clear that the geometric calibration data D201 is not a required element in all embodiments. In those embodiments where it is present, the geometric calibration data D201 is transferred from the library 250 to the transformation process 202 in the item description compiler 200. Although the line-scan camera ray alone does not uniquely define the exact point in space where the marking area was located, the line-scan camera ray intersects the three-dimensional representation of the item itself, as provided by the item isolator 140 and the dimension estimator 150. Together, the line-scan camera rays and the three-dimensional item representations create a one-to-one correspondence between marking areas and items.
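By way of illustration only, a simplified sketch of mapping a line-scan camera pixel to its imaging ray and intersecting that ray with an item surface (here reduced to a horizontal plane at the item's top height; the calibration structure and names are hypothetical) could be:

```python
import numpy as np

def pixel_ray(pixel, calib):
    """Minimal sketch: derive the imaging ray for an arbitrary line-scan camera
    pixel by interpolating between two calibrated extreme pixels.  `calib` holds
    (pixel number, near-end point, far-end point) for each extreme of the swath,
    as recorded during geometric calibration (hypothetical structure)."""
    (p0, near0, far0), (p1, near1, far1) = calib
    f = (pixel - p0) / (p1 - p0)                 # interpolation fraction along the swath
    origin = (1 - f) * np.asarray(near0) + f * np.asarray(near1)
    target = (1 - f) * np.asarray(far0) + f * np.asarray(far1)
    direction = target - origin
    return origin, direction / np.linalg.norm(direction)

def intersect_item(origin, direction, item_top_z):
    """Intersect the camera ray with a horizontal plane at the item's top surface
    height -- a stand-in for the full three-dimensional item representation
    provided by the item isolator and dimension estimator."""
    s = (item_top_z - origin[2]) / direction[2]
    return origin + s * direction                # (x, y, z) point on the item
```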
Another parameter sensor that makes use of geometric calibration is the weight estimator. In the described embodiment, the weight estimator obtains item X-axis position information from its object sensors. That is, in terms of Figure 14, the weight estimator assigns a weight value to item A or B based on the output of at least the sensing volume input object sensor, which indicated where along the virtual belt the items were first loaded onto the scale. The object sensor positions can be manually calibrated by simply measuring their distances from the dimension estimator coordinates, or automatically calibrated using calibration items in motion and the instantaneous transport system locations reported by the transport location sensor. It will be noted that, in the illustrated embodiment, items are loaded onto the moving scale 172 before being observed by the area camera 152. Similarly, the upper observation line-scan camera 88 (shown at least in Figure 4A) may read item marking areas before the items are observed by the area camera 152. Therefore, weight measurements and marking area readings can be taken before the dimension sensor 150 and the item isolator 140 (shown schematically in Figure 11) determine which items are in the scan tunnel. In fact, the product identification function of the system would perform just as well with the dimension sensor 150 and the upper observation line-scan camera 132A located at the end of the scan tunnel as with those sensors at the front of the scan tunnel. The forward location of these two sensors is preferred only to minimize the processing delay required to produce an identification, i.e., the product identification can be produced shortly after the item exits the tunnel when the data is collected at the front of the tunnel rather than at the end of the tunnel.
The weight estimator only knows the geometric X-axis location of the items it weighs. Two items that overlap side by side (that is, have common X locations but different Y locations) on the moving scale can be difficult to weigh individually. Therefore, the weight reported in this case is an aggregate weight of all the items located side by side at that transport system location (X value). When a weight value arrives at the item description compiler 200 (shown schematically in Figure 14) with a transport system location that matches more than one item, the item description compiler 200, in some embodiments, adds that weight value to the item description D167 of each such item, along with an indication that it is an aggregate weight. In other embodiments, the unique item identifier(s) of the other side-by-side items are also added to the item description D167, for reasons described below.
The various parameter values that are transformed by the transformation process 202 become spatially transformed parameter values D70, which are then placed in an information queue 207. The information queue 207 is a random-access temporary store; that is, it does not operate on a first-in, first-out basis. Because there are, in general, multiple items on the sensing volume conveyor belt, and because each parameter sensor sends its sensed parameters as soon as it recognizes them, the information queue 207 at any point in time contains spatially transformed parameter values D70 of multiple items arranged in order of arrival. Because, for example, the latency between the time an item's marking area physically passes through a line-scan camera's field of view and the time at which the marking area reader produces the corresponding marking area value is highly variable, it is even possible that some spatially transformed parameter values D70 may not be recognized or interpreted until long after the item has left the system 25. The item description compiler 200 seeks to determine which of the spatially transformed parameter values D70 reported in the information queue 207 were measured at the surface or at the location of a given item, through a process of geometric joining or geo-parameter matching.
Data joining process: item identification depends on the dimension sensor 150 and the item isolator 140. The item isolator 140 determines which items are in the detection volume (and gives them a unique tracking number, the UII), and the dimension sensor 150 creates dimensioning data, including, but not limited to, the closed height profile with the corresponding lower contours.
Together, the data from the item isolator 140 and the dimension sensor 150 form the baseline input to the item description D167 being created in the item description compiler 200. Other parameter values are identified as belonging to the item and are added to the item description D167.
In some embodiments, the data joining process 215 receives transport system locations D148 and delivers image retrieval requests D149 to the region extraction process 138 of the marking area reader 130 shown in Figure 10. As mentioned above, parameter values are received by the item description compiler 200 from the various parameter sensors, are subjected to the transformation process 202, and are temporarily placed in the information queue 207. As the item description compiler 200 assembles an item description D167, by having the data joining process 215 match spatially transformed parameter values D70 that share the same transport system location data D148, it sends a data request D169 to the information queue 207 to remove the spatially transformed parameter value D70 from the information queue 207 and place it in the appropriate item description D167. Therefore, spatially transformed parameter values D70 are continually added to and removed from the information queue 207.
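As an illustrative sketch only (field names, data structures, and the tolerance value are assumptions, with the tolerance loosely reflecting the roughly two-tenths-of-an-inch spatial accuracy noted above), the geometric joining step could be expressed as:

```python
def join_parameters(item, queue, tolerance=0.2):
    """Minimal sketch of the geometric data-joining step: pull from the
    information queue every spatially transformed parameter value whose
    transport system location falls on (or within a small tolerance of) the
    item's footprint, and add it to the item description."""
    description = {"uii": item["uii"], "parameters": []}
    remaining = []
    for value in queue:
        x, y = value["location"]
        if (item["x_min"] - tolerance <= x <= item["x_max"] + tolerance and
                item["y_min"] - tolerance <= y <= item["y_max"] + tolerance):
            description["parameters"].append(value)   # joined to this item
        else:
            remaining.append(value)                    # stays queued for other items
    queue[:] = remaining
    return description
```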
Finally, the item description D167 is sent to the item identification processor 300. The item description compiler 200 sends the item description file D167 to the item identification processor 300 at a point in processing based on one or more selected criteria. The criteria may include, for example, submitting the item description D167 when the current transport system location exceeds the item's location by more than about 25% of the detection volume length. In one embodiment, the sending criterion may correspond to a belt position less than or equal to a particular distance from the end of the output belt. Some parameter values are never associated with any item and may be called orphaned values. Orphaned values are created if, for example, a parameter value is delayed by a processor restart or if the transport system location value D148 is defective. Likewise, where an item moves relative to the conveyor, for example a rolling bottle or can, certain values can become orphaned. An accumulation of unmatched parameter values in the information queue 207 tends to degrade system performance. In some embodiments, the item description compiler 200 may include functionality to purge parameter values from the information queue 207 after a certain selected period of time. The decision to purge a parameter value may depend, for example, on whether its location lies significantly beyond the length of the outgoing feed conveyor belt relative to the new spatially transformed parameter values D70 arriving in the information queue 207. This condition would indicate that the orphaned value is associated with an item that has long since left the detection volume.
Item ID Processor 300
Figure 15 is a data flow diagram for the item identification processor 300. The item description compiler 200 creates an item description D167 for each item isolated by the item isolator. The item identification processor 300 opens a file for each item description D167 supplied to it by the item description compiler 200. The item description D167 includes a list of all available measured parameter values collected by the system. The basic function performed by the item identification processor 300 is to compare the item description D167 to a set of product descriptions stored in the product description database 310 and to decide, according to predetermined logic rules, whether the item is one of those products. In some embodiments, the product descriptions in the product description database 310 contain the same types of information about the products that were collected about the items. Typically, product descriptions include digital marking area values, weight data, and sizing data for the products. In some embodiments, a product description may comprise other parameter values of the products, statistical information about the various parameters (e.g., standard deviation of the weight), digital photographs of each product, etc. In one embodiment, a polygonal representation of an item can be generated for each camera's focal plane space. Therefore, for each object, multiple polygons are generated, corresponding to each of the camera views of that object. As an example, for a system that has seven perspectives, seven polygons would be generated and stored for use in the joining process, as described below.
The item identification processor 300 attempts to determine the best match between the unknown item's parameter values and the database of (known) product parameters. In some embodiments, the marking area value (typically the UPC) is used as the primary database query key. Assuming an exact marking area match is found in the product description database, the item identification processor 300 examines the remaining parameter values to decide whether the item is the product represented by the marking areas. This is a validation that the UPC was not misread or corrupted. As outlined above, partial UPCs (or other codes) may be further evaluated to narrow down the number of possible item choices, and, in one embodiment, a small number of choices may be passed to an operator for resolution.
The item description D167 is provided to a database query formulation process 305, which uses the available item parameters to determine, based on, for example, a given marking area, weight, and height, which product the item may be. Once a query D209 has been formulated, the database query formulation process 305 delivers it to the product description database 310, which in turn provides a query result D210 to a product identification logic process 312. The product identification logic 312 compares the query result D210, which is a product description, to the original item description D167 to decide whether the two descriptions are similar enough to declare an identification.
The item identification processor 300 is preprogrammed with a set of logic rules by which this validation is performed. In some embodiments, the rules are deterministic (for example, the weight has to be within x% of the nominal weight for the product). In other embodiments, the rules can be implemented using, for example, fuzzy logic.
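A minimal sketch of such a deterministic rule set, with illustrative tolerance values and hypothetical field names, might be:

```python
def validate_candidate(item, product, weight_tol=0.05, size_tol=0.25):
    """Minimal sketch of deterministic validation: the candidate product
    returned by the database query is accepted only if the measured weight
    and dimensions fall within tolerances of the nominal product values.
    The tolerances and field names are illustrative assumptions only."""
    weight_ok = abs(item["weight"] - product["nominal_weight"]) \
        <= weight_tol * product["nominal_weight"]
    dims_ok = all(abs(item[d] - product[d]) <= size_tol * product[d]
                  for d in ("length", "width", "height"))
    return weight_ok and dims_ok
```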
Fuzzy or adaptive logic can be used, particularly in the product identification logic 312, to handle unusual situations. For example, some items will have multiple digital marking area values, and certain products will be known to have multiple visible marking areas, since multiple line-scan cameras produce images of each item and since some items carry two or more different markings (for example, a multi-pack of water, where each bottle may have a barcode and the wrapper of the multi-pack may have a different barcode). In this example, fuzzy logic might perform better than a strict rule that governs how to handle conflicting information.
While in some embodiments the digital marking area value may be the preferred parameter value for the database lookup, there are cases where the database query formulation process 305 uses one or more of the other parameter values in a first attempt to identify an item. For example, where marking areas are misread or have been partially or completely obscured from the line-scan cameras, the database query formulation process 305 is programmable to use the other previously described parameter values to accurately identify the item as a product. For example, if the weight, shape, and size of the product have been measured with a high degree of certainty and some of the bar code digits have been read, this data can provide a sufficiently unique product identification.
The output of the product identification logic 312 is either a product description with an identification probability or an exception flag that indicates that no matching products were found.
A mismatch can occur, for example, where an item is scanned which has never been entered into the database.
This output is transferred to a product/exception decision process 314, where a programmable tolerance level is applied.
If the identification probability is above this tolerance, the product identification data D233 and the UII value D231 are output.
In typical embodiments, the identification output is applied to a POS system 400. On the other hand, if the identification probability is below the tolerance level, then the product/exception decision process 314 associates an exception flag D232 with the UII D231. Optionally, in some embodiments, when an item is marked as an exception, the UII D231 is applied to an exception handler 320. The optional exception handling 320 may include doing nothing (e.g., letting the consumer have that item for free), providing a system operator with an indicator to act on, or performing an automatic rescan.
Another optional function that is part of the item identification processor is the ability to update the product description database based on the parameter values of the new item.
For example, the average and standard deviation of the weight of a product, which are typical parameters stored in the product description database 310, can be refined with the new weight data collected each time that particular product is identified. In some embodiments, the item identification processor 300 updates its product description database 310 with each parameter value it receives for items that pass through the detection volume. The database update process 313 receives the UII D231 and the item description D167 from the database query formulation process 305 and performs the database update when it receives the product description D233 and the UII D231 from the decision process 314. The database update process 313 is also warned when a UII D231 is an exception (flag D232), so that it can purge inaccurate item descriptions D167 associated with the excepted UII D231.
Prior to multi-read disambiguation, the Consolidator employs a single-pass "best match" algorithm to assign barcodes to an item at its programmed output position (i.e., the Y belt position at which the Consolidator sends information about an item to the output subsystem for subsequent transmission to the POS). The best match algorithm for barcodes takes as input 1) a single item for which output should be generated, 2) an item domain consisting of all items to consider when identifying the best match between barcodes and items, where the output item is also part of that domain, and 3) a barcode domain consisting of all barcodes available to be assigned to the output item.
The algorithm works by visiting each barcode in the barcode domain in turn and computing a compatibility metric (a figure of merit, FOM) between the barcode and every item in the supplied item domain. Once all barcode-item associations have been computed, the algorithm discards all associations with FOM values that are below a specific threshold (this threshold can be arrived at heuristically and can be updated according to real-world performance, as a user setting, or automatically). All remaining barcode-item associations are then sorted according to the distance along the camera ray, and the association with the shortest distance is considered to be the best match (the logic being that it is unlikely that a barcode was read on an item that is behind another item, so the barcode closest to the camera lens is more likely to be associated with the item in front). If the item identified as the best match is the same as the output item, the barcode is assigned to the output item. Otherwise, the barcode is not assigned.
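Purely as an illustration of the single-pass best-match assignment described above (the FOM and ray-distance functions are assumed to be supplied elsewhere, and the threshold value is illustrative), a sketch could be:

```python
def assign_barcodes(output_item, item_domain, barcode_domain,
                    fom, ray_distance, fom_threshold=0.5):
    """Minimal sketch of the single-pass best-match algorithm.  `fom(barcode,
    item)` and `ray_distance(barcode, item)` are assumed callables supplying the
    figure of merit and the distance along the camera ray."""
    assigned = []
    for barcode in barcode_domain:
        # Score the barcode against every item; keep only plausible associations.
        candidates = [(ray_distance(barcode, item), item)
                      for item in item_domain
                      if fom(barcode, item) >= fom_threshold]
        if not candidates:
            continue
        # Shortest distance along the camera ray wins: the barcode was most
        # likely read on the item closest to the camera lens.
        _, best_item = min(candidates, key=lambda c: c[0])
        if best_item is output_item:
            assigned.append(barcode)    # otherwise the barcode is not assigned
    return assigned
```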
Although in the above specification this invention has been described in relation to certain particular embodiments thereof, and many details have been presented for purposes of illustration, it will be apparent to those skilled in the art that the invention is susceptible to change and that certain other details described in this document may vary considerably without departing from the basic principles of the invention.
Furthermore, it should be appreciated that structural features or method steps shown or described in any embodiment of this document may be used in other embodiments as well.
Claims (14)
1. System for asynchronously identifying an item within a detection volume, characterized in that it comprises: a plurality of object sensors, each object sensor being configured and arranged to determine at least one parameter describing objects as they are relatively moved with respect to the detection volume, and having a known position and attitude with respect to the detection volume; a position sensor configured and arranged to produce position information regarding the relative motion, wherein the position information does not comprise system clock information; a processor, configured and arranged to receive parameters from the object sensors and to associate the parameters with respective objects among the objects based on the position information and based on the known position and attitude of the object sensor that determines each respective parameter, without regard to system clock information, and to compare, for each object that has at least one associated parameter, the at least one associated parameter to known item parameters to assign an item ID to the object.
2. System according to claim 1, characterized in that the object sensors additionally comprise: a height sizing sensor comprising a substantially flat light source, configured and arranged to project flat lighting at an angle to a path of objects during their relative movement in the detection volume;
an associated height sizing detector constructed and arranged to detect a reflection of plane lighting; and wherein the processor is configured and arranged to determine, based on the detected reflection and angle, a height profile of each object.
3. System according to any one of claims 1 and 2, characterized in that the object sensors additionally comprise: a pair of area sizing sensors, each area sizing sensor being configured and arranged to determine an instantaneous width of the object as the object moves relatively, passing through a substantially flat visual field of each respective area sizing sensor, wherein one of the area sizing sensors comprises a bright field imaging sensor and the other of the area sizing sensors comprises a dark field imaging sensor, and the processor determines the instantaneous width based on an output from one or both of the area imaging sensors.
4. System according to claim 3, characterized in that it additionally comprises an object discriminator that is configured and arranged to single out objects based on object contours created from a plurality of instantaneous widths measured by the area sizing sensors.
5. System according to any one of the preceding claims, characterized in that the plurality of object sensors comprise line-scan cameras and the processor is additionally configured and arranged to process images captured by the line-scan cameras to identify a marking area for each object.
6. System according to claim 5, characterized in that the marking area comprises a bar code and the processor is configured and arranged to identify the bar code.
7. System according to claim 6, characterized in that the barcode additionally comprises characters and the processor is additionally configured and arranged to identify the characters of the barcode.
8. System according to claim 7, characterized in that the barcode characters are identified using an algorithm selected from the group consisting of an algorithm for optical character recognition and a matching algorithm that is based on a comparison between the character format and a library comprising selected possible character formats.
9. Method for asynchronously identifying an item within a detection volume, characterized in that it comprises: determining at least one parameter that describes objects as they are relatively moved with respect to the detection volume, using a plurality of object sensors, each having a known position and attitude with respect to the detection volume; producing position information regarding the relative motion, wherein the position information does not comprise system clock information; and associating the parameters with respective objects among the objects based on the position information and based on the known position and attitude of the object sensor that determines each respective parameter, without regard to system clock information, and comparing, for each object that has at least one associated parameter, the at least one associated parameter to known item parameters to assign an item ID to the object.
10. Method according to claim 9, characterized in that it additionally comprises: projecting plane lighting at an angle to a path of objects during their relative movement in the detection volume; detecting a reflection of the plane lighting; and determining, based on the detected reflection and angle, a height profile of each object.
11. Method according to claim 9 or 10, characterized in that it additionally comprises: determining an instantaneous width of the object as the object moves relatively, passing through a substantially flat visual field of each one of a pair of area sizing sensors, wherein one of the area sizing sensors comprises a bright field imaging sensor and the other of the area sizing sensors comprises a dark field imaging sensor, based on an output from one or both of the area imaging sensors.
12. Method according to any one of claims 9 to 11, characterized in that it additionally comprises: singling out objects based on object contours created from a plurality of instantaneous widths measured by sizing sensors.
13. Method according to any one of claims 9 to 12, characterized in that it additionally comprises: processing images captured by the object sensors to identify a marking area for each object.
14. Method according to claim 13, characterized in that the marking area comprises a bar code that also comprises characters, and the processing of the images comprises identifying the bar code characters using an algorithm selected from the group consisting of an optical character recognition algorithm and a matching algorithm that is based on a comparison between the character format and a library that comprises selected possible character formats.
类似技术:
公开号 | 公开日 | 专利标题
BR112012022984A2|2020-08-25|system and method for product identification.
US9651363B2|2017-05-16|Systems and methods of object measurement in an automated data reader
US8396755B2|2013-03-12|Method of reclaiming products from a retail store
US7137556B1|2006-11-21|System and method for dimensioning objects
US10663590B2|2020-05-26|Device and method for merging lidar data
EP2668614A2|2013-12-04|Exception detection and handling in automated optical code reading systems
JP2008151764A|2008-07-03|Dimensioning and weighing system
US20090160975A1|2009-06-25|Methods and Apparatus for Improved Image Processing to Provide Retroactive Image Focusing and Improved Depth of Field in Retail Imaging Systems
US9053379B2|2015-06-09|Single arch portal scanner and method of scanning
CN109414819A|2019-03-01|The robot obtained for automated image
AU2019247400B2|2021-12-16|Method, system, and apparatus for correcting translucency artifacts in data representing a support structure
CN105654150B|2018-03-27|A kind of method of batch recognition object information and precise positioning
RU2574806C2|2016-02-10|Method and system for identifying product
US20220083959A1|2022-03-17|System and method for detecting products and product labels
WO2020210820A1|2020-10-15|System and method for determining out-of-stock products
WO2013106446A1|2013-07-18|Method of reclaiming products from a retail store
同族专利:
公开号 | 公开日
US9122677B2|2015-09-01|
WO2011113044A2|2011-09-15|
JP2013522143A|2013-06-13|
WO2011113044A3|2012-02-23|
RU2012139044A|2014-04-20|
CA2792774A1|2011-09-15|
CN102884539A|2013-01-16|
AU2011226646B2|2015-04-02|
EP2545501B1|2018-09-05|
US8469261B2|2013-06-25|
CA2792774C|2016-11-08|
KR20130049773A|2013-05-14|
US20130248593A1|2013-09-26|
AU2011226646A1|2012-10-04|
EP2545501A2|2013-01-16|
ES2701024T3|2019-02-20|
CN102884539B|2016-08-03|
KR101777556B1|2017-09-12|
US20110248083A1|2011-10-13|
JP5814275B2|2015-11-17|
引用文献:
公开号 | 申请日 | 公开日 | 申请人 | 专利标题

JPS61128108A|1984-11-28|1986-06-16|Canon Inc|Light detector|
US4972494A|1988-02-26|1990-11-20|R. J. Reynolds Tobacco Company|Package inspection system|
JPH05332717A|1992-05-29|1993-12-14|New Oji Paper Co Ltd|Sensor device|
US5497314A|1994-03-07|1996-03-05|Novak; Jeffrey M.|Automated apparatus and method for object recognition at checkout counters|
US6554189B1|1996-10-07|2003-04-29|Metrologic Instruments, Inc.|Automated system and method for identifying and measuring packages transported through a laser scanning tunnel|
JPH11183159A|1997-12-22|1999-07-09|Toshiba Corp|Image processor and its method|
US20090134221A1|2000-11-24|2009-05-28|Xiaoxun Zhu|Tunnel-type digital imaging-based system for use in automated self-checkout and cashier-assisted checkout operations in retail store environments|
JP2001043409A|1999-07-30|2001-02-16|Yamato Scale Co Ltd|Article discriminating device and size measuring instrument|
AUPQ212499A0|1999-08-10|1999-09-02|Ajax Cooke Pty Ltd|Item recognition method and apparatus|
US6484066B1|1999-10-29|2002-11-19|Lockheed Martin Corporation|Image life tunnel scanner inspection system using extended depth of field technology|
AU8110101A|2000-08-03|2002-02-18|Qualcomm Inc|System, method, and apparatus for electromagnetic compatibility-driven product design|
US7337960B2|2004-02-27|2008-03-04|Evolution Robotics, Inc.|Systems and methods for merchandise automatic checkout|
CN101013079B|2007-02-07|2010-11-10|浙江大学|Small-sized material digitalized detecting and grading apparatus|
JP4966096B2|2007-05-28|2012-07-04|パナソニック株式会社|Optical cutting three-dimensional measuring device|US7770799B2|2005-06-03|2010-08-10|Hand Held Products, Inc.|Optical reader having reduced specular reflection read failures|
US8396755B2|2008-07-14|2013-03-12|Sunrise R&D Holdings, Llc|Method of reclaiming products from a retail store|
EP2803026A4|2012-01-09|2015-08-26|Sunrise R & D Holdings Llc|Method of reclaiming products from a retail store|
WO2012103139A2|2011-01-24|2012-08-02|Datalogic ADC, Inc.|Systems and methods of capturing security images in an automated data reader|
JP5803155B2|2011-03-04|2015-11-04|セイコーエプソン株式会社|Robot position detection device and robot system|
TW201342261A|2012-04-02|2013-10-16|Hon Hai Prec Ind Co Ltd|System and method for automatically controlling quality of products|
US8959065B2|2012-04-09|2015-02-17|Mitek Analytics, LLC|System and method for monitoring distributed asset data|
US8733656B2|2012-05-22|2014-05-27|Cognex Corporation|Code and part associating method and apparatus|
US20140175289A1|2012-12-21|2014-06-26|R. John Voorhees|Conveyer Belt with Optically Visible and Machine-Detectable Indicators|
EP3005231B1|2013-05-29|2021-10-06|Sicpa Brasil Indústria de Tintas e Sistemas Ltda.|Method and device for counting objects in image data in frames, computer program product|
US10002271B2|2013-11-04|2018-06-19|Datalogic Usa, Inc.|Data reading system and method for multi-view imaging using an adjustable mirror|
CN104613993B|2014-01-10|2017-01-18|北京兆信信息技术股份有限公司|Method, device and system for product identification|
KR102144125B1|2014-01-14|2020-08-12|한국전자통신연구원|Apparatus and Method Measuring Mail Size and Acquiring Mail Information|
JP5830706B2|2014-01-29|2015-12-09|パナソニックIpマネジメント株式会社|Clerk work management device, clerk work management system, and clerk work management method|
KR102202232B1|2014-02-05|2021-01-13|한국전자통신연구원|System for sorting parcel and method thereof|
US9938092B2|2014-10-03|2018-04-10|Wynright Corporation|Perception-based robotic manipulation system and method for automated truck unloader that unloads/unpacks product from trailers and containers|
WO2016070842A1|2014-11-06|2016-05-12|同方威视技术股份有限公司|Container number recognition method and system|
CN104841641B|2015-05-11|2017-01-04|青岛理工大学|Automatic object sorting device based on omnidirectional stereoscopic barcode scanning|
JP6086455B2|2015-05-27|2017-03-01|株式会社ニレコ|Fruit and vegetable sorting device and fruit and vegetable sorting method|
CA2931901A1|2015-06-08|2016-12-08|Wal-Mart Stores, Inc.|Systems and methods for controlling checkout belt speed|
US10244149B2|2015-06-09|2019-03-26|Lockheed Martin Corporation|Imaging system with scan line titled off focal plane|
US9418267B1|2015-08-10|2016-08-16|Ground Star Llc|Modular RFID shelving|
CN105425308A|2015-12-18|2016-03-23|同方威视技术股份有限公司|System and method for article tracking|
WO2017120202A1|2016-01-05|2017-07-13|Wal-Mart Stores, Inc.|Apparatus and method for monitoring merchandise|
US10185943B2|2016-02-02|2019-01-22|Walmart Apollo, Llc|Self-deposit apparatus|
JP6679344B2|2016-02-29|2020-04-15|東芝テック株式会社|Weighing system|
CN107661862B|2016-07-27|2020-11-03|边隆祥|Array camera system for automatic express parcel sorting|
CN107273901A|2017-05-05|2017-10-20|苏州亿凌泰克智能科技有限公司|Illumination evaluation space method and system|
US11120286B2|2017-06-30|2021-09-14|Panasonic Intellectual Property Management Co., Ltd.|Projection indication device, parcel sorting system, and projection indication method|
US10557738B2|2017-09-11|2020-02-11|Black & Decker Inc.|External fuel metering valve with shuttle mechanism|
CN107716323B|2017-09-13|2019-08-16|立际物流科技(上海)有限公司|Method for sorting packages with a flexible sorter|
US10789569B1|2017-11-27|2020-09-29|Amazon Technologies, Inc.|System to determine item footprint|
JP2019121118A|2017-12-28|2019-07-22|伊東電機株式会社|Code reader and code reading method|
CN108160530A|2017-12-29|2018-06-15|苏州德创测控科技有限公司|Material loading platform and workpiece feeding method|
US11055659B2|2018-09-21|2021-07-06|Beijing Jingdong Shangke Information Technology Co., Ltd.|System and method for automatic product enrollment|
KR102240192B1|2019-06-14|2021-04-14|권형준|Printed code reader enabling simultaneous recognition of multiple codes with multiple foci, and logistics system and method using the printed code reader|
CN110560911B|2019-08-23|2021-08-17|北京志恒达科技有限公司|Cigarette laser two-dimensional code burning and carving method|
CN110823191B|2019-10-08|2021-12-07|北京空间飞行器总体设计部|Method and system for determining ocean current measurement performance of mixed baseline dual-antenna squint interference SAR|
US11057612B1|2019-11-08|2021-07-06|Tanzle, Inc.|Generating composite stereoscopic images using visually-demarked regions of surfaces|
Legal status:
2020-09-01| B15I| Others concerning applications: loss of priority|Free format text: LOSS OF THE PRIORITY OF 07/01/2011 (US 61/430,804) CLAIMED, DUE TO FAILURE TO FILE THE DOCUMENT EVIDENCING ITS ASSIGNMENT AS REQUIRED BY THE LEGAL PROVISIONS OF ART. 16, §6 OF LAW 9.279 OF 4/05/1996 (LPI), ITEM 27 OF NORMATIVE ACT 128/1997, ART. 28 OF INPI-PR RESOLUTION 77/2013 AND ART. 3 OF IN 179/2017, SINCE THE APPLICANT NAMED IN THE PCT REQUEST PETITION IS DIFFERENT FROM THE ONE WHO FILED THE PRIORITY APPLICATION. |
2020-09-08| B06F| Objections, documents and/or translations needed after an examination request according [chapter 6.6 patent gazette]|
2020-10-13| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-03-16| B06A| Patent application procedure suspended [chapter 6.1 patent gazette]|
2021-07-27| B11B| Dismissal acc. art. 36, par 1 of ipl - no reply within 90 days to fulfil the necessary requirements|
2021-11-23| B350| Update of information on the portal [chapter 15.35 patent gazette]|
Priority:
Application number | Application date | Patent title
US31325610P| true| 2010-03-12|2010-03-12|
US61/313,256|2010-03-12|
US201161430804P| true| 2011-01-07|2011-01-07|
US61/430,804|2011-01-07|
PCT/US2011/028348|WO2011113044A2|2010-03-12|2011-03-14|System and method for product identification|